| title | uuid | pmc_id | search_term | text |
|---|---|---|---|---|
| stringlengths 1–827 ⌀ | stringlengths 36–36 | stringlengths 5–8 | stringclasses 18 values | stringlengths 0–8.42M |
Raising awareness of alcohol as a modifiable risk factor for breast cancer: A randomized controlled trial comparing the efficacy of accessing an interactive website with a non-interactive website | 4c9f80c8-19e6-4011-9d4c-b641cc99905b | 11743881 | Patient Education as Topic[mh] | Introduction Breast cancer (BC) is the most prevalent cancer worldwide, with an estimated 7.8 million women alive at the end of 2020 who had been diagnosed in the previous 5 years. Although the effectiveness of BC treatments has improved, mortality rates remain high. Alcohol use, physical inactivity, being overweight or obese, and hormone replacement therapy (HRT) are potentially modifiable BC risk factors, responsible for approximately 30 % of new BC cases. The relationship between alcohol consumption and the risk of developing BC is dose-dependent, with an increased risk of approximately 10 % for each additional daily standard unit of alcohol (AU). This corresponds to approximately one glass of wine, a can of beer, or a small glass of spirits. In Europe, almost 11,000 new cases of BC per year are related to light to moderate drinking levels (less than 2 AU per day), with no alcohol consumption carrying the lowest risk. However, as most of the general population consumes alcohol, many countries have issued 'lower-risk' thresholds, at which alcohol-related consequences are minimized, to help people adopt healthier behaviors. In general, healthy, non-pregnant, and non-breast-feeding women should avoid alcohol consumption higher than one AU per day, and women at higher risk of BC due to other factors, such as family history of BC or obesity, should completely avoid alcohol or consume alcohol only occasionally. In France, it has been estimated that perfect adherence to daily thresholds between 2015 and 2050 would prevent more than 61,000 new BC cases. Similar figures have been estimated by other studies conducted in Australia, Canada, and worldwide. To adhere to these recommendations, women should be aware of modifiable BC risk factors, of how to quantify alcohol consumption into AU, and of the daily alcohol threshold. In fact, most women are aware neither of modifiable risk factors for BC nor of how to quantify alcohol consumption into AU. Recently, when we asked a sample of women to list BC risk factors, we found that less than 20 % mentioned obesity, alcohol use, lifestyle, or use of exogenous hormones, and most did not know how to measure alcohol consumption into AU. These data underscore the urgency of developing evidence-based educational programs to increase the knowledge of BC risk factors. Digital tools have shown promising results in reducing at-risk drinking as well as in improving health processes. On the other hand, two recent reviews concluded that further studies are needed to clarify the effectiveness of interactive digital tools in helping pregnant women and adolescents to manage weight gain. Promising results have been achieved by studies investigating the effectiveness of interactive digital tools in improving the psychological well-being and/or quality of life of BC patients. Accordingly, a growing number of studies are starting to evaluate the effectiveness of these tools in increasing the knowledge of BC risk factors among healthy young women. These latter studies are mainly focused on healthy diet and physical activity. Our study aimed to evaluate the efficacy of an interactive website, compared to a validated non-interactive website, in increasing the knowledge of alcohol as a BC risk factor.
We hypothesized that this interactive tool would be more effective in increasing women's awareness than a similar but non-interactive digital tool.
Methods and material 2.1 Study design We conducted a randomized controlled trial (RCT) in agreement with the Declaration of Helsinki and Good Clinical Practice guidelines. The study was approved by the Ethics Committee of the University Hospital of Cagliari, Italy. 2.2 Procedures Participants were adult women (age >18 years), able to speak Italian and to understand and give informed consent. Women were recruited at two hospitals in Cagliari, Italy (the University Hospital and the Civil Hospital) among outpatients waiting to undergo mammography for organized BC screening or for other reasons, such as personal initiative, symptoms, family history of BC, and/or previous BC diagnosis. Based on the results of our previous study, we planned to recruit a sample of approximately 600 women and to divide them randomly, using a computer-generated list, into two arms, the intervention and control groups, in which women accessed an interactive and a non-interactive website, respectively. Before accessing the websites, participants were administered a questionnaire investigating their baseline knowledge of BC risk factors, the alcohol content of common alcoholic beverages expressed in the number of AU, and the daily recommended threshold to avoid high alcohol-related risks (see , , Questionnaire). Information on age, civil status, employment, education, reason for medical assessment, and family history of BC was also collected. Then, each participant received a tablet already linked to the homepage of the interactive or non-interactive website, for the intervention and control groups, respectively. For the intervention group, we used a modified version of the "Abreast of health" website, created by the University of Southampton, UK (freely available at https://abreastofhealth.github.io/ ), to increase women's knowledge of BC risk factors in a simple and interactive way. Information is provided as minimal text and several figures. Briefly, it contains the AUDIT-C questionnaire and other questions on smoking, height, and weight. The image provided with the AUDIT-C questions shows the content in AU of the most common beverages to help women quantify their alcohol use. After that, the website provides tailored feedback. In agreement with the lead author of the UK team (JS), we adapted the text for use in Italy. Furthermore, as the content of alcohol in a 'standard drink' varies between the UK (8 g) and Italy (12 g), we also adapted the images of the alcoholic beverages corresponding to an AU in Italy. For instance, we substituted the image of the half pint with a can of beer. The Italian version was re-translated into English and the new English version was submitted to the authors of the original version to ensure consistency between the two English versions. The Italian version is freely available at the link " https://allasalute.github.io ". Replies given by participants to the questions of the interactive website were not recorded. For the control group, we used the website devoted to BC created by the Italian Ministry of Health, freely available at Il tumore della mammella (salute.gov.it ), containing certified information on BC but without interaction or figures. Participants of both groups accessed the websites for as long as they wished, then returned the tablets to the researcher and were re-administered the same questionnaire given at baseline. We also measured the time spent accessing the websites.
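Because one AU corresponds to different amounts of pure alcohol in different countries (8 g in the UK versus 12 g in Italy, as noted above), a brief illustrative sketch of the underlying conversion is given below. It is written in R; the beverage volume and alcohol-by-volume value are assumptions chosen only for the example and are not taken from the study website.

```r
# Illustrative sketch (not part of the study website): converting a beverage
# into alcohol units (AU) under different national definitions of one unit.
# The 330 ml / 5% ABV values are assumptions for the example; 0.789 g/ml is
# the density of ethanol.
grams_alcohol <- function(volume_ml, abv) volume_ml * abv * 0.789

to_units <- function(volume_ml, abv, grams_per_unit) {
  grams_alcohol(volume_ml, abv) / grams_per_unit
}

grams_alcohol(330, 0.05)                   # one can of beer: ~13 g of pure alcohol
to_units(330, 0.05, grams_per_unit = 8)    # ~1.6 AU under the UK definition
to_units(330, 0.05, grams_per_unit = 12)   # ~1.1 AU under the Italian definition
```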
2.3 Outcomes The primary outcome was the increase in the knowledge of alcohol as a BC risk factor, evaluated as the rate of women who mentioned alcohol as a BC risk factor in the corresponding section of the questionnaire. Secondary outcomes were the increase in the knowledge of the other modifiable BC risk factors, of the alcohol content of common alcoholic beverages expressed in the number of AU, and of the daily recommended threshold to avoid high alcohol-related risks. 2.4 Statistical analysis Student's t-test was carried out to assess differences in age between groups; the Mann–Whitney test was carried out to assess differences in time spent browsing the assigned website. Chi-squared tests were carried out to verify that the baseline knowledge of the control and intervention groups did not differ. McNemar's tests for paired data were used to assess the efficacy of both websites, comparing baseline to post-intervention knowledge. Chi-squared tests were also carried out to assess differences in efficacy between the two websites. Multivariable logistic regression analysis was performed to explore socio-demographic factors affecting the probability of acquiring knowledge of alcohol as a BC risk factor. Detailed statistical procedures are reported in , Supplementary data.
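A minimal R sketch of the tests named in this section is shown below. The article does not state which software was used, and the data frame and variable names are hypothetical; the sketch only illustrates how the within-group (McNemar), between-group (chi-squared) and multivariable logistic regression analyses fit together.

```r
# Minimal R sketch of the tests named above. The data frame 'dat' and all
# variable names are hypothetical and only illustrate the structure of the
# analyses, not the study's actual codebook.

# Baseline comparability of the two arms (chi-squared test)
chisq.test(table(dat$group, dat$knows_alcohol_pre))

# Within-group change from baseline to post-intervention knowledge
# (McNemar's test for paired binary data), e.g. for the intervention arm
intv <- subset(dat, group == "intervention")
mcnemar.test(table(pre = intv$knows_alcohol_pre, post = intv$knows_alcohol_post))

# Between-group difference in post-intervention knowledge
chisq.test(table(dat$group, dat$knows_alcohol_post))

# Socio-demographic factors associated with acquiring the knowledge:
# multivariable binary logistic regression, reported as odds ratios
fit <- glm(acquired_alcohol_knowledge ~ group + education + civil_status +
             screening_reason + age,
           data = dat, family = binomial)
exp(cbind(OR = coef(fit), confint.default(fit)))   # Wald 95% CIs
```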
Results 3.1 Sample description Recruitment occurred between January and June 2023. Overall, 738 women met our inclusion criteria and were invited to participate: 67 (9 %) refused or did not complete the final questionnaire because they were called for mammography, and 671 (91 %) were included in the study. Participants were randomly allocated, 329 (49.0 %) to the intervention and 342 (51.0 %) to the control group. Women did not differ in sociodemographic characteristics between the two groups. 3.2 Baseline knowledge Baseline knowledge of BC risk factors, of the alcohol content in AU of alcoholic beverages, and of the daily alcohol threshold did not differ between the intervention and control groups. In detail, only 20 % of women in both groups mentioned alcohol as a BC risk factor. Women in the intervention group spent a longer time accessing the interactive website than those accessing the non-interactive one (median value in minutes: 2.35 vs 1.98; p < 0.001). 3.3 Efficacy of accessing the websites in increasing the knowledge of BC risk factors According to the results of McNemar's tests, accessing both websites significantly increased the percentages of women acquiring knowledge of most of the BC risk factors examined. Specifically, compared to their baseline values (20 % in both groups), the percentages of women who mentioned alcohol as a BC risk factor were 85 % (p < 0.001) and 75 % (p < 0.001) in the intervention and control groups, respectively. Regarding the content in AU of commonly consumed beverages, accessing both websites significantly, although weakly, increased the percentages of women who reported the right content, except for "a mug of beer" in the control group. Regarding the daily alcohol threshold, at baseline most women were already aware of this value. Accessing both websites significantly but slightly increased this knowledge (both groups, p < 0.001). 3.4 Comparing the efficacy of accessing the interactive website to accessing the non-interactive website Analysis restricted to women who were not aware of risk factors at baseline shows that accessing the interactive website resulted in higher percentages of women acquiring knowledge on most BC risk factors than the non-interactive site, although the control group were more likely to acquire the knowledge of age as a risk factor. Specifically, among women who were not aware at baseline, those who acquired this information were 82 % and 69 % of the intervention and control groups, respectively (p < 0.001). Finally, we found no differences between the two groups in acquiring knowledge on diet, contraceptive pills, being female, and family history of BC. Regarding the content in AU of commonly consumed beverages (can of beer, mug of beer and two shot glasses), the intervention group achieved better results than the control group for half of the beverages investigated, with no differences for the other beverages. Regarding the daily alcohol threshold, accessing the interactive website produced better results than the non-interactive website. 3.5 Factors associated with the acquisition of knowledge that alcohol is a BC risk factor Several socio-demographic factors were significantly associated with the probability of acquiring knowledge that alcohol is a BC risk factor.
In detail, the level of education interacts with the specific website accessed, while civil status and the reason for mammography influence this probability independently of other factors:

- women with middle-school education are more likely to gain knowledge when included in the intervention rather than the control group (OR 2.64; p = 0.01)
- women with high-school education are more likely to increase their knowledge when included in the intervention group compared to the control group (OR 2.56; p = 0.01)
- women with a university degree do not differ in their probability of increasing knowledge when included in either the intervention or the control group (OR 0.75; p = 0.55)
- for the entire sample, single or unmarried status was negatively associated with the probability of increasing knowledge compared to any other civil status (OR 0.56; p = 0.03)
- for the entire sample, participating in spontaneous BC screening was positively associated with the probability of increasing knowledge compared to participating in organized screening programs (OR 1.61; p = 0.04).
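The stratum-specific odds ratios above imply a model containing an education-by-group interaction. The following hedged R sketch shows how such an interaction can be specified; it is not the authors' code, the covariates are assumed from the variables listed in the Methods, and coefficient names depend on the factor coding.

```r
# Sketch of a logistic model with an education-by-group interaction, as the
# result above implies. Variable names are hypothetical; the exact covariates
# and coding used by the authors are described in their Supplementary data.
fit <- glm(acquired_alcohol_knowledge ~ group * education + civil_status +
             screening_reason,
           data = dat, family = binomial)
summary(fit)

# The intervention-vs-control odds ratio within a given education stratum
# combines the main effect with the relevant interaction term, e.g. (names
# depend on the factor coding):
# exp(coef(fit)["groupintervention"] +
#     coef(fit)["groupintervention:educationhigh_school"])
```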
Discussion The results of the present study support the use of digital tools to improve the knowledge of alcohol as a modifiable risk factor for BC, of the daily alcohol threshold, and of how to measure alcohol consumption. In addition, our results suggest that accessing interactive websites is more effective than accessing non-interactive websites, especially among people with lower levels of education. Educational programs aiming at increasing the knowledge of modifiable BC risk factors, including alcohol use, should consider alcohol health literacy. Health literacy, defined as "the motivation, knowledge, and competencies to access, understand, appraise and apply health information to make judgments and take decisions in everyday life", is a key field of activity in health promotion. Accordingly, educational programs aiming at increasing the knowledge of BC risk factors should consider the degree to which people can process and understand information on alcohol use, alcohol-related consequences, the content of alcoholic beverages expressed in AU, and the daily alcohol threshold. From an empowerment perspective, increasing women's alcohol literacy is needed to guarantee their right to make informed choices on alcohol consumption. Women need to understand the impact of alcohol on their health and learn how to measure alcohol consumption into AU in order to adhere to alcohol recommendations. It is noteworthy that accessing the interactive website obtained better results than the non-interactive website among participants with lower education levels. Education level is a proxy indicator of socio-economic status and, in general, the lower the education level, the lower the level of health literacy. Our interactive website was co-created with women presenting for breast screening to be a "literacy-friendly" tool, also suitable for people with low levels of education and poor reading skills, for whom the acquisition of knowledge from written texts, as in the case of the non-interactive website of the control group, could be more difficult. The free availability of this digital tool could also have important implications in terms of equity of access to simple yet scientifically correct information for making informed choices, contributing to reducing the BC burden. Although there appears to have been a sustained reduction in alcohol consumption in Italy, the incidence of new cases of BC has increased over the same time period. However, the observed reduction does not include unrecorded alcohol consumption (e.g., home-produced alcoholic beverages). Furthermore, in Italy as well as in other Western countries, alcohol consumption among women has increased in recent years rather than decreased. Accordingly, in the female population, the incidence of malignant breast tumors increased between 2008 and 2016 in all age groups. However, this increased incidence could be at least in part due to the introduction of organized screening programmes as well as to the widespread use of opportunistic screening. Our study faced the challenge of adapting a tool developed for use in the UK to Italy. The main problem concerned the difference in the alcohol content of an AU between Italy and the UK. Although AU are a useful tool to simplify the evaluation of alcohol consumption, how an AU is defined differs between countries, ranging from 8 g in the UK to 19.75 g in Japan.
The translation of any questionnaire needs to ensure good face validity for participants; in our study, we therefore modified the images of alcoholic beverages corresponding to the AU to take this difference into account. However, in planning the study, we realized that this problem also concerns some screening tools. As an example, although the AUDIT is a WHO-developed tool, its third question investigates alcohol consumption equal to or greater than six AU on a single occasion in both the Italian and English versions, even though such consumption corresponds to different amounts of alcohol in Italy (72 g) and the UK (48 g). Difficulties due to these differences deserve to be addressed by further studies. Some limitations of this study should be acknowledged. A control group receiving no information at all would have allowed an appreciation of the efficacy of accessing both websites in increasing the knowledge of our sample of women; such a group was not included. However, engaging women in research in this area and then leaving them with incorrect knowledge about BC risk factors would have been unethical. Our digital platform also has some limitations. For instance, accessing both the interactive and the non-interactive website only slightly increased knowledge about the alcohol content in AU of the most common alcoholic beverages. For some beverages, we found no differences between the two groups. This negative result may be due, at least in part, to the ambiguity of the images showing the content in AU. Both the image of a wine drink and that of an aperitif may be open to interpretation and may have confused the 'visual memory' of our participants who visited the interactive website. In the light of these results, we intend to adapt these images during the next iteration of the platform to make them clearer. Finally, our results show that most women were already aware of the daily alcohol threshold at baseline. This finding appears to differ from the results observed in a younger population. This may be due in part to the different ages of the populations. However, it may also be due to the response option "1 AU" representing both the right reply and the lowest threshold. Accordingly, women may have chosen it from a "precautionary principle" perspective. Our study did not evaluate the persistence of the acquired knowledge in the medium and long term. The results demonstrate the efficacy of our interactive website in transferring comprehensible information to a sample of women, including participants with low education levels. Future studies will be conducted to evaluate the persistence of this acquired knowledge and its impact on behaviour. Finally, accessing the interactive website did not increase the knowledge of certain BC risk factors (i.e., contraceptive pill, age, being female) and did not differ in efficacy from the control group for others (i.e., diet, contraceptive pill, being female, family history of BC). This is because the website was developed to examine the change in knowledge of alcohol within the context of understanding risk factors for BC. As a research tool, this in part mitigates the fact that information on risk factors was given to both groups. The design of the website enables the other risk factors (e.g., weight and diet) to be tested in a similar fashion. A strength of the method is its modular approach.
In future studies, other modules of our interactive website, devoted to other modifiable BC risk factors such as contraceptive pills and diet, will be developed and tested.
Conclusions Digital tools are promising instruments to increase people's awareness of healthy behaviors; however, their efficacy needs to be evidence-based and verified. To the best of our knowledge, our study is the first RCT conducted to evaluate the efficacy of an interactive website specifically designed to increase the knowledge of alcohol as a BC risk factor. Our results provide evidence that accessing this tool increases women's awareness and that the interactive website is more effective than the non-interactive one, especially among women with lower education levels.
Claudia Sardu: Writing – review & editing, Writing – original draft, Validation, Supervision, Software, Project administration, Methodology, Investigation, Funding acquisition, Formal analysis, Data curation, Conceptualization. Fabrizio Angius: Writing – review & editing, Formal analysis, Data curation. Paolo Contu: Writing – review & editing, Formal analysis, Data curation. Sofia Cosentino: Writing – review & editing, Formal analysis, Data curation. Monica Deiana: Writing – review & editing, Formal analysis, Data curation. Matteo Fraschini: Writing – review & editing, Validation, Supervision, Software, Methodology, Formal analysis, Data curation. Clelia Madeddu: Writing – review & editing, Visualization, Investigation, Data curation. Elena Massa: Writing – review & editing, Visualization, Investigation, Data curation. Alessandra Mereu: Writing – review & editing, Formal analysis, Data curation. Luigi Minerba: Writing – review & editing, Formal analysis, Data curation. Carola Politi: Writing – review & editing, Visualization, Investigation, Data curation. Silvia Puxeddu: Writing – review & editing, Formal analysis, Data curation. Francesco Salis: Writing – review & editing, Formal analysis, Data curation. Julia M.A. Sinclair: Writing – review & editing, Validation, Supervision, Software, Methodology, Conceptualization. Roberta Agabio: Writing – review & editing, Writing – original draft, Supervision, Software, Project administration, Methodology, Investigation, Funding acquisition, Formal analysis, Data curation, Conceptualization.
Data are available from the authors upon reasonable request.
The study was approved by the Ethics Committee of the University Hospital of Cagliari, Italy (Prot. 2022/33; December 14, 2022; see , Supplementary data). The study was conducted in full accordance with the guidelines for randomized clinical trials and the ethical principles of the Declaration of Helsinki and Good Clinical Practice, including rules concerning the protection of personal data. All participants provided written informed consent.
This research was partially supported by a grant from the Fondazione di Sardegna (" Alcol e altri fattori di rischio modificabili per il tumore alla mammella: progetto finalizzato ad aumentare la loro conoscenza tra la popolazione femminile "; Prot. U685.2022/AI.670.BE). Fondazione di Sardegna had no role in the design and conduct of the study; access and collection of data; analysis and interpretation of data; preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.
The authors declare no conflicts of interest.
|
Albrecht von Graefe and the foundation of scientific ophthalmology | f5cd1ede-5948-4d8a-829a-4810e6a71853 | 7933826 | Ophthalmology[mh] | In November 1850, Graefe was back in Berlin. He opened an eye clinic on the model of those he had seen in France. There, he provided medical consulting, treatment, teaching, and research. Facilities of this kind played an important role in the development of medical specialization in Germany. Medical specialization is a relatively recent phenomenon. For centuries, specialized medical practice existed outside official medicine. Healers who performed operations on specific parts of the body were dentists, oculists, and bonesetters. They transmitted their knowledge orally and empirically, and occupied a low position in the medical hierarchy. Until the 1850s, medical specialization developed unevenly across Europe. Ophthalmology was one of the first fields to organize as a structured specialty. Chairs of ocular surgery were established in France in 1765 (Paris) and 1788 (Montpellier), but the outbreak of the French Revolution put an end to the development of French ophthalmology for nearly 50 years. In Austria, the teaching of ophthalmology was introduced in 1773 by Empress Maria Theresa. In 1812, a professorship in ophthalmology was established at the University of Vienna, but this did not lead to further developments. In Germany, the University of Berlin offered a course in ophthalmology from 1828. The same year, the Charité Hospital established an eye clinic. From the middle of the century, specialists gradually grew in number and became a recognizable social category. They opened private clinics combining teaching and medical research. Medical specialization came to be perceived simultaneously as a form of knowledge and of practice. The specialized physician began to be associated with medical progress and to be considered more competent than a general practitioner because of his ability to deal with difficult cases. By the 1890s, the "battle" for the acceptance of specialties within traditional medical institutions and the general public was won. Graefe's clinic became a reference point for the treatment of eye diseases, attracting patients, students, and assistants. They flocked from all over the world to attend his lessons, which he gave in German, French, and English. He had numerous assistants from Europe: Argyll Robertson (1837–1909) from Edinburgh, John Soelberg Wells (1824–1879) from London, Sir Henry Rosborough Swanzy (1844–1913) from Dublin, Andreas Anagnostakis (1826–1897) from Athens, Robert Blessig (1830–1878) from St. Petersburg, Eduard Junge (1832–1898) from Moscow, Carl Waldenhauer (1820–1899) from Riga, Henri Dor (1835–1912) from Bern, Friedrich Horner (1831–1886) from Zürich, Edmund Hansen Grut (1831–1907) from Copenhagen; others came from North America: Elkanah Williams (1822–1888) from Cincinnati, Aaron Friedenwald (1836–1902) from Baltimore, Charles Stedman Bull (1844–1914) from New York, and Francis Buller (1844–1905) from Montreal. In 1854, Graefe founded the Archiv für Ophthalmologie, the first specialized scientific journal in ophthalmology published in a German state. He was also at the origin of the foundation of the German Ophthalmological Society (1857), the oldest medical scientific association in the world. By the age of 30, Graefe was one of the most renowned ophthalmologists of his time. In 1867, he presided over the third International Congress of Ophthalmology held in Paris.
In 1870, he was elected a foreign member of the Royal Swedish Academy of Sciences. Graefe applied a rigorous method in his profession, based on clinical observations and experimental practice. He was among the first practitioners to make systematic use of the ophthalmoscope. Invented in 1851 by Hermann von Helmholtz (1821–1894), this instrument made it possible for the first time in history to observe the posterior segment of a living eye. The use of the ophthalmoscope allowed von Graefe to make numerous contributions to the physiology and pathology of the eye. He discovered that the fusion of the two images from both eyes into a coherent image occurred not in the retina but in the brain. He identified three subtypes of glaucoma; he introduced iridectomy to relieve intraocular pressure and applied this procedure to treat iritis and iridochoroiditis. He introduced the linear extraction for cataracts, reducing infections of the cornea. He also invented a specific knife equipped with a narrow, pointed blade that minimized the egress of aqueous humor. Von Graefe also improved the surgical treatment of strabismus. He had the idea of dividing the surgical correction between the two eyes, intervening on the healthy eye to correct a squint. He also described different cases of hemianopsia, postulating that homonymous hemianopsias were due to unilateral cerebral disease. Von Graefe died in Berlin on July 20, 1870. He left an impressive number of publications that form an unequalled repository of knowledge and a monument to the early years of scientific ophthalmology. His commitment to his profession, his dedication to his patients, students, and colleagues, and his relentless work to create an international community of eye specialists remain foundational in modern ophthalmology.
Nil.
There are no conflicts of interest.
|
Predictors and outcomes in patients undergoing surgery for acute type A aortic dissection requiring concomitant venoarterial extracorporeal membrane oxygenation support—a retrospective multicentre cohort study | 849c8aa8-25a1-426e-a028-3e46ec9327b1 | 11805496 | Surgical Procedures, Operative[mh] | Preoperative malperfusion, advanced age and stroke are well-studied risk factors for patient outcome following surgery for acute type A aortic dissection (ATAAD). Less is known about ventricular failure and consecutive low cardiac output syndrome (LCOS) requiring the use of venoarterial extracorporeal membrane oxygenation (ECMO), which may be one of the worst-case scenarios in the perioperative treatment of ATAAD. In the setting of ATAAD, ventricular failure has multiple causes, in particular preoperative tamponade leading to resuscitation, or obstruction of the coronary arteries by the haematoma up to frank coronary dissection, causing malperfusion and potentially requiring additional coronary artery bypass grafting during aortic repair. So far, only limited data are available evaluating the outcome of patients receiving perioperative venoarterial ECMO support, indicating mortality rates far above 70%. Additionally, it is unclear whether there is a realistic chance of weaning from venoarterial ECMO for these patients or whether it can even act as a bridging therapy to more advanced surgical treatments, such as left ventricular assist devices or heart transplantation, in very selected cases. This highlights the importance of identifying independent risk factors and predictors of mortality in this subgroup. We report a multicentre experience of patients undergoing surgery for ATAAD and receiving perioperative support with venoarterial ECMO. We focus on the chance of successful weaning and evaluate the pre-ECMO lactate peak as a potential candidate for survival assessment. Ethics approval The local ethics committees of participating centres officially approved this study (Berlin EA2/096/20, Innsbruck UN 5106, Freiburg 24-1247-S1, Köln 201212_1, Bern 2020–01149). It complies with the Declaration of Helsinki. Patient population Study design followed the STROBE (STrengthening the Reporting of OBservational studies in Epidemiology) statement. Out of 3719 patients undergoing surgery for ATAAD between 2004 and 2023 at 5 different European aortic centres, 117 (3.1%) patients needed perioperative ECMO support. These 117 patients constitute the retrospective multicentre study cohort. Patients who received preoperative ECMO implantation before aortic repair were not included in this study ( n = 10), given the high mortality reported in the literature. Additionally, iatrogenic ATAAD and patients with subacute/chronic aortic dissection (onset ≥ 14 days) were excluded. Quantification of preoperative organ malperfusion in ATAAD was described recently and followed the structure of the German Registry for Acute Aortic Dissection Type A. Surgical procedure Median sternotomy was the standard operative access. After establishing cardiopulmonary bypass, systemic cooling was initiated. The level of hypothermia was mainly moderate, ranging from 20°C to 28°C, in combination with antegrade cerebral perfusion during caudal circulatory arrest. The main arterial cannulation sites were the right axillary artery, the right femoral artery or direct central cannulation of the ascending aorta or the aortic arch.
After induction of cardioplegic arrest and inspection of the entry site, the ascending aorta was resected with or without the aortic arch. Total arch replacement, with or without the use of the frozen elephant trunk technique, was performed in case of an aneurysm or located entry tear. This also accounted for the aortic root. Concomitant coronary artery bypass grafting (CABG) using vein grafts was carried out primarily in case of severe aortic root destruction with involvement of the dissection into the coronary artery (Neri B lesion) or circumferential detachment of the coronary artery (Neri C lesion) or presence of severe calcification in combination with preoperative or intraoperative coronary malperfusion. CABG was performed in the setting of acute aortic repair. The decision for ECMO support and measurement of lactate The decision for venoarterial ECMO support was either made at the end of the operation or during the postoperative course on the intensive care unit (ICU). If weaning from cardiopulmonary bypass after reperfusion was not possible despite inotropic and vasoactive support, intraoperative ECMO implantation was performed. It was distinguished whether it was a primary left ventricular, right ventricular or biventricular pump failure based on transoesophageal echocardiographic findings. Whenever possible, the inserted cannulas were used for continuing ECMO support. In selected cases of peripheral malperfusion, the cannulation site was switched from femoral artery to the side branch of the surgical prosthesis in order to establish antegrade flow. Details on arterial cannulation site of ECMO are displayed in Table . The pre-ECMO lactate peak (mg/dl) was defined as the highest available lactate peak in an arterial blood gas analysis right before venoarterial ECMO implantation. This was also the case for the pre-ECMO creatine kinase-MB (CK-MB) peak (U/l), which was measured via blood sample analysis to assess the extent of myocardial injury. Definition of outcomes and follow-up Primary endpoints were 30-day mortality and successful weaning of venoarterial ECMO support after haemodynamic stabilization. Open chest therapy was defined as non-sternal closure at the end of the operation due to pronounced bleeding and/or cardiopulmonary oedema. The reasons for death were divided into cardiac and non-cardiac reasons. Cardiac was defined by the underlying ventricular failure and consecutive LCOS. Non-cardiac was divided into 4 different entities, whereas the presence of more than one reason was also possible: multiorgan failure despite ECMO support; cerebral ischaemic and/or haemorrhagic stroke and/or cerebral oedema; sepsis and consecutive septic shock; and aortic rupture and/or extensive bleeding and consecutive haemorrhagic shock. A postoperative cerebral computed tomography scan was mandatory for the diagnosis of stroke and cerebral oedema. The follow-up was 97.5% complete for 30-day mortality. Only 3 patients (2.5%) were lost to follow-up after discharge before 30 days. The median survival time was 6 (1–27) days. Follow-up was closed in March 2024. Statistical analysis Continuous variables were tested for normal distribution by using the Shapiro–Wilk test and additional visualization using histograms. All continuous variables were not normally distributed. Therefore, they were presented as median with corresponding interquartile range (IQR; 25th–75th percentile). Categorical data were represented as absolute numbers with corresponding percentages. 
Multivariable binary logistic regression analysis was performed to identify independent risk factors for 30-day mortality. All variables included in Table (preoperative variables), Table (intraoperative variables) and Table (variables for venoarterial ECMO support) were considered for the initial logistic regression model. To include the pre-ECMO CK-MB peak as a variable in the regression model, single imputation using predictive mean matching, considering all preoperative variables, was performed. Variables for the regression model were chosen using the backward selection technique based on the Akaike information criterion. All selected variables were then used for the multivariable binary logistic regression and are shown in Table . Univariable binary logistic regression was performed as a complementary analysis. A receiver operating characteristic curve was constructed to measure the prediction accuracy of the pre-ECMO lactate peak for 30-day mortality. The corresponding area under the curve (AUC) was defined as acceptable with a value >0.70. Restricted cubic splines were used to investigate the association between the pre-ECMO lactate peak and survival. Kaplan–Meier curves including a log-rank test were prepared to illustrate and compare survival between groups according to the indication for ECMO implantation. All P-values are 2-sided. The α-level was defined at 0.05. Statistical analysis was performed using R (The R Foundation for Statistical Computing) version 4.3.2.
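A hedged R sketch of this pipeline is given below. The manuscript states that R 4.3.2 was used but does not name packages; mice (predictive mean matching), MASS (AIC-based backward selection), pROC, rms and survival are assumptions chosen for illustration, and all variable and object names are hypothetical.

```r
# Hedged sketch of the analysis steps described above; packages and variable
# names are assumptions, not taken from the authors' code.
library(mice)      # single imputation by predictive mean matching
library(MASS)      # stepAIC for backward selection
library(pROC)      # ROC curve and AUC
library(rms)       # restricted cubic splines
library(survival)  # Kaplan-Meier curves and log-rank test

# 1) Impute the pre-ECMO CK-MB peak (single imputation, predictive mean matching)
meth <- make.method(dat)                 # default method per variable
meth["pre_ecmo_ckmb"] <- "pmm"
dat1 <- complete(mice(dat, m = 1, method = meth, seed = 1), 1)

# 2) Backward selection by AIC from a full logistic model, then odds ratios
full <- glm(death_30d ~ age + sex + preop_shock + coronary_malperfusion +
              total_arch_replacement + concomitant_cabg +
              pre_ecmo_lactate + pre_ecmo_ckmb,
            data = dat1, family = binomial)
sel  <- stepAIC(full, direction = "backward", trace = FALSE)
exp(cbind(OR = coef(sel), confint.default(sel)))   # Wald 95% CIs

# 3) Discrimination of the pre-ECMO lactate peak for 30-day mortality
auc(roc(death_30d ~ pre_ecmo_lactate, data = dat1))

# 4) Non-linear association of lactate with survival via restricted cubic
#    splines (shown here in a Cox model; the exact model form is not stated)
dd <- datadist(dat1); options(datadist = "dd")
fit_rcs <- cph(Surv(fu_days, died) ~ rcs(pre_ecmo_lactate, 4), data = dat1)
plot(Predict(fit_rcs, pre_ecmo_lactate))

# 5) Survival by indication for ECMO (type of ventricular failure), log-rank test
survdiff(Surv(fu_days, died) ~ failure_type, data = dat1)
```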
Those 3 early postoperative complications limited the duration of ECMO support and were the driving reasons for a withdrawal of life support. Cardiac causes of death—continuous and/or recurring ventricular failure and consequent LCOS—accounted for death in 29 patients (25%). The 90-day survival probability according to type of ventricular failure including patients at risk is shown in Fig. . No significant difference in terms of right-, left- or biventricular failure was observed ( P = 0.390). However, there was a trend towards a higher chance of successful weaning from ECMO in the case of isolated right ventricular failure when compared to the left- and biventricular failure ( P = 0.050). Independent risk factors for mortality and validation of pre-ECMO lactate as predictor for survival The results of uni- and multivariable binary logistic regression for 30-day mortality are shown in Table . The pre-ECMO lactate peak [odds ratio (OR) 1.02, 95% confidence interval (CI) 1.005–1.032], the presence of preoperative shock (OR 9.49, 95% CI 1.785–96.504) and the need for total arch replacement (OR 6.67, 95% CI 1.639–34.695) were independent associates for 30-day mortality. The corresponding receiver operating curve for the pre-ECMO lactate peak is illustrated in Fig. A. Pre-ECMO lactate peak was identified as a valid predictor for 30-day mortality (AUC 0.73). Furthermore, restricted cubic splines indicated a relevant association between the pre-ECMO lactate peak and overall survival ( P = 0.004) (Fig. B). The median lactate of 85 mg/dl represented the threshold with an increased risk. The study cohort consisted of 117 patients. Preoperative variables are shown in Table . The median age was 61 (IQR 55–69) years and almost half of the patients (46%) were female. Preoperative shock was present in 47 (40%) patients and 27 (23%) of them underwent preoperative resuscitation. Coronary malperfusion was the most present type of preoperative organ malperfusion in 55 (47%) patients. The intraoperative variables are depicted in Table . Median cardiopulmonary bypass time was 291 (IQR 238–387) and aortic cross-clamp time 136 (IQR 97–173) min. The main arterial cannulation technique was the axillary artery in 52 (44%) patients. Additional replacement of the aortic root was performed in 55 (47%) patients and 32 (27%) underwent total arch replacement. Concomitant CABG was performed in 49 (42%) patients. The results for ECMO support are shown in Table . The majority of 87 (74%) patients received intraoperative implantation of ECMO due to weaning failure from cardiopulmonary bypass, whereas 30 (26%) patients underwent ECMO implantation during the early postoperative course in the ICU due to the development of ventricular failure and consecutive LCOS. The main indication was a biventricular failure in 51 (44%) patients, followed by isolated right in 39 (33%) and isolated left ventricular failure and 27 (23%) patients. The median pre-ECMO lactate peak was 85 (IQR 57–131) mg/dl and the median pre-ECMO CK-MB peak was 63 (IQR 23–232) U/l. The postoperative variables are shown in Table . Six patients (5%) died intraoperatively despite ECMO implantation, 3 of them experienced aortic rupture in downstream segments and in the other 3 maintaining sufficient circulation was not possible. Median time in the ICU was 7 (IQR 2–18) days and time on ECMO 3 (IQR 1–7) days. Successful weaning of venoarterial ECMO was achieved in 36 (31%) patients. Thirty-day mortality was 72% (84 patients) and in-hospital mortality 80% (94 patients). 
The leading cause of death was non-cardiac in the majority of patients (55%, 65 patients). Multiorgan failure despite ECMO support was the main reason in 45 patients (39%), while fatal neurologic injury and sepsis accounted for almost 30% of early deaths. These 3 early postoperative complications limited the duration of ECMO support and were the driving reasons for a withdrawal of life support. Cardiac causes of death—continuous and/or recurring ventricular failure and consequent LCOS—accounted for death in 29 patients (25%). The 90-day survival probability according to type of ventricular failure, including patients at risk, is shown in Fig. . No significant difference in terms of right-, left- or biventricular failure was observed ( P = 0.390). However, there was a trend towards a higher chance of successful weaning from ECMO in the case of isolated right ventricular failure when compared to left- and biventricular failure ( P = 0.050).
Independent risk factors for mortality and validation of pre-ECMO lactate as predictor for survival
The results of uni- and multivariable binary logistic regression for 30-day mortality are shown in Table . The pre-ECMO lactate peak [odds ratio (OR) 1.02, 95% confidence interval (CI) 1.005–1.032], the presence of preoperative shock (OR 9.49, 95% CI 1.785–96.504) and the need for total arch replacement (OR 6.67, 95% CI 1.639–34.695) were independently associated with 30-day mortality. The corresponding receiver operating characteristic curve for the pre-ECMO lactate peak is illustrated in Fig. A. The pre-ECMO lactate peak was identified as a valid predictor for 30-day mortality (AUC 0.73). Furthermore, restricted cubic splines indicated a relevant association between the pre-ECMO lactate peak and overall survival ( P = 0.004) (Fig. B). The median lactate of 85 mg/dl represented the threshold associated with an increased risk.
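As an illustration of the analysis described above, the following minimal sketch shows how odds ratios with 95% confidence intervals and the discrimination (AUC) of the pre-ECMO lactate peak could be obtained from a patient-level table. The file name and the column names (lactate_peak, preop_shock, total_arch, died_30d) are hypothetical placeholders, the restricted cubic spline model is not reproduced, and this is not the authors' actual analysis code.

```python
# Minimal sketch, assuming a hypothetical per-patient table with columns
# lactate_peak (mg/dl), preop_shock (0/1), total_arch (0/1), died_30d (0/1).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

df = pd.read_csv("ecmo_cohort.csv")  # hypothetical patient-level dataset

# Multivariable binary logistic regression for 30-day mortality
X = sm.add_constant(df[["lactate_peak", "preop_shock", "total_arch"]])
fit = sm.Logit(df["died_30d"], X).fit(disp=0)

# Exponentiate coefficients and confidence limits to obtain ORs with 95% CIs
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "CI 2.5%", "CI 97.5%"]
print(or_table.round(3))

# Discrimination of the pre-ECMO lactate peak alone (reported in the text as AUC 0.73)
auc = roc_auc_score(df["died_30d"], df["lactate_peak"])
print(f"ROC AUC for pre-ECMO lactate peak: {auc:.2f}")
```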
In this retrospective multicentre cohort study, we report—to the best of our knowledge—the largest current experience of patients undergoing surgery for ATAAD receiving perioperative venoarterial ECMO support due to LCOS. This patient cohort represents a high-risk subgroup in the surgical treatment of ATAAD with tremendously high mortality. Based on this, considerable arguments arise questioning the realistic chance of weaning from venoarterial ECMO and even suggesting that ECMO may be a potentially futile indication given the patient’s overall prognosis. Given the high rate of preoperative tamponade and coronary malperfusion, surgeons were confronted with complex and haemodynamically unstable patients at admission. This might explain the rather high rate of emergent femoral cannulation of 36%, causing retrograde arterial flow for CPB. In general, all institutions involved in this study follow a clear preference for antegrade flow via axillary or central cannulation. Once the decision for ECMO support was made, patients with preoperative distal malperfusion underwent a switch of cannulation sites to establish antegrade flow via the sidebranch of the prosthesis. This guarantees perfusion of the true lumen and avoids retrograde perfusion of the false lumen via distal re-entries, a clear benefit in ATAAD patients when addressing distal malperfusion. Additionally, antegrade flow via the sidebranch or axillary artery should be the preferred cannulation strategy in this setting of LCOS in order to reduce afterload for the left ventricle. In this study, a successful weaning rate of 31% was achieved. Given the comparably young age of 61 (IQR 55–69) years in our patient cohort, perioperative ECMO enabled survival of 20% of these patients who otherwise would not have survived ATAAD.
The duration of ECMO was comparatively short, with a median of 3 (IQR 1–7) days for the total cohort and 5 (IQR 5–7) days for the patients who were successfully weaned. This might be driven by the fact that primarily non-cardiac causes were the main reasons for death in this cohort, mainly progressive multiorgan failure despite venoarterial ECMO support. In a recent study with data from the observational, multicentre ‘PC-ECMO’ registry investigating the outcomes of patients with postcardiotomy cardiogenic shock after surgery for ATAAD requiring venoarterial ECMO support, a total of 62 patients were analysed and compared to non-dissection patients . In-hospital mortality was 74.2%, while 37.1% of patients were successfully weaned from venoarterial ECMO. These results are comparable to the findings of this study and further reveal that, in this patient cohort, venoarterial ECMO emerged as an effective rescue option, demonstrating in-hospital mortality and postoperative outcomes comparable to the general patient cohort with postcardiotomy cardiogenic shock. In a Chinese single-centre observation of 27 patients who underwent surgery for ATAAD and received venoarterial ECMO support, 9 (33.3%) patients were successfully weaned . The overall in-hospital mortality was 81.5%. The main causes of death were multiorgan failure, neurological complications and bleeding. Similar results on early morbidity are presented in this study, which furthermore accounts for the rather short duration of ECMO support in our patient cohort. Lethal early complications during ECMO after surgical repair for ATAAD have also been reported by other groups. Sultan et al. provided data from the Pennsylvania Health Care Cost Containment Council, with a median time from ECMO implantation to death of 1 day.
The rate of coronary malperfusion (47%) was high in our study cohort when compared to reported incidences of coronary involvement in ATAAD of 9% in single-centre reports . This observation has also been pointed out in a recently published systematic review and meta-analysis by Sá et al. on the use of ECMO after surgical repair for ATAAD. This clearly highlights the complexity of patients suffering from a lethal aortic event with coronary complications. Furthermore, it also underlines the importance of rapid diagnosis and initiation of surgery, owing not only to the well-known hourly death rate of aortic dissection but also to the ongoing coronary ischaemia and potentially irreversible myocardial damage in these patients. Markers of myocardial injury have been assessed by Hou et al., comparing preoperative CK-MB levels between successful and failed weaning candidates. CK-MB levels were lower in patients who were successfully weaned from venoarterial ECMO compared to those who failed to wean and died [14 (6–30) vs 55 (28–138) U/l, P < 0.01]. This could indicate that the extent of myocardial injury measured by CK-MB before ECMO implantation might be a relevant indicator for non-successful weaning and mortality. However, the pre-ECMO CK-MB peak was not associated with 30-day mortality in this study, although different timepoints of CK-MB measurement may cause relevant bias and make conclusions difficult. Instead, the pre-ECMO lactate peak emerged as a valid predictor for 30-day mortality. A rising lactate before ECMO implantation may originate from many different sources, both from systemic malperfusion as well as from local organ malperfusion (coronary, peripheral and visceral) caused by ATAAD.
A previous study on the pre-ECMO lactate peak in the setting of extracorporeal cardiopulmonary resuscitation found that pre-ECMO lactate levels in patients with refractory cardiac arrest were associated with 1-year survival . The authors concluded that the pre-ECMO lactate may be an easily accessible and quickly available point-of-care measurement which might act as an early prognostic marker when considering initiation or continuation of extracorporeal cardiopulmonary resuscitation treatment. In our study, the preoperative lactate peak emerged as a strong predictor for 30-day mortality. Furthermore, a threshold of 85 mg/dl was identified and associated with impaired survival. Lactate levels should be considered in perioperative decision-making in the heart team but should not be the single basis for or against the decision for perioperative ECMO support.
Limitations
This study is limited by its retrospective nature. Furthermore, it lacks information about the quality of life of the survivors, which would be of great interest in the context of a severe course of ICU treatment after ECMO support and surgery for ATAAD. In terms of validation of the pre-ECMO lactate peak, lactate sources driven by local malperfusion should be evaluated based on specific radiographic assessment in every single patient, which was not possible in this study. A corresponding lactate clearance after ECMO implantation could give more detailed insights, in addition to the lactate peak, for further identification of patients at high risk during the immediate course after ECMO implantation.
Patients requiring perioperative venoarterial ECMO support due to LCOS after surgery for ATAAD show tremendously high morbidity and mortality. Nevertheless, with ECMO as salvage treatment, almost one-third of patients were successfully weaned and 20% survived to hospital discharge. The pre-ECMO lactate peak is a valid predictor for mortality and shows a strong correlation with survival. Decision-making for venoarterial ECMO support can be crucial under the circumstances of ATAAD and should be well balanced against the high-risk profile in this patient cohort. Lactate levels, in conjunction with clinical parameters, should be considered as an additive measure when considering this treatment option.
Knowledge Gaps in Gluten-Free Diet Awareness among Patients and Healthcare Professionals: A Call for Enhanced Nutritional Education | d6c4d124-7d85-4862-b940-474770f2b347 | 11314127 | Patient Education as Topic[mh] | Celiac disease (CeD) is a chronic immune-mediated disorder that affects approximately 1% of the general population. CeD is characterized by inflammation of the small intestinal mucosa and subsequent villous atrophy, triggered by the ingestion of gluten protein. Gluten ingestion leads to several intestinal (e.g., diarrhea, abdominal pain) and extraintestinal (e.g., osteoporosis) symptoms in patients with CeD. If left untreated, CeD can lead to serious complications, including intestinal cancer or infertility . The only available treatment is a strict, lifelong, gluten-free diet (GFD), which should result in complete symptomatic, histological, and serological remission, and prevent these complications . However, it can be exceedingly difficult to completely avoid all gluten-containing foods. Thus, adherence to a GFD among people with CeD is estimated to range from 42% to 91% in adults , and from 23% to 98% in children and adolescents , depending on the population considered and the criteria used to define adherence. The key to the success lies in dietary counseling by a specialized dietitian–nutritionist and in the maintenance of adherence to the prescribed diet by the patient . Several studies have examined the factors associated with adherence to a GFD and the most often reported are cognitive (knowledge, attitudes, understanding of product labels, and other food intolerances); emotional (anger, depression, anxiety); and sociocultural and sociodemographic characteristics (public awareness, eating out, travel, social events, and cost of gluten-free foods); as well as joining an advocacy group and having access to a regular dietary follow-up . Limited education about the disease and a GFD among CeD patients is also an attributing factor to inadequate adherence . Additionally, the management of healthcare professionals (HCPs) might influence the adherence of patients to this diet. Many CeD patients express dissatisfaction with the time dedicated and quality of information provided by their physicians regarding a GFD, leading them to seek information on social networks . Therefore, it has been widely demonstrated that achieving good adherence to a GFD requires two main issues : (1) that HCPs dedicate sufficient time to explain the diet after diagnosis, that they stay constantly updated on the diet, and that they have practical tools to measure adherence during the follow-up. This control allows them to detect and correct any errors and transgressions in the diet and (2) that patients and their families have comprehensive counseling and nutritional education about a GFD. They must be informed about changes in their food habits and lifestyle, and be taught about how to integrate a GFD into all spheres of their life . There are some guidelines that outline the essential information that patients should receive to correctly follow a GFD. 
These include explaining the disease and the requirement for a lifelong GFD, planning a balanced GFD, discussing the benefits of adhering to a GFD and the risk of nutritional deficiencies, identifying sources of hidden gluten in various food items and critical points of cross-contamination, educating patients on how to read labels before purchasing gluten-free food, providing precautions while eating out and traveling, and ensuring access to celiac support groups and resources . Thus, the aim of our study was to assess the current knowledge about a GFD and the clinical monitoring of adherence to the diet among CeD patients and HCPs in Spain, in order to design improvement strategies for the training of patients and professionals.
2.1. Study Design and Instruments Specific questionnaires were designed to assess the knowledge of the celiac population, and their caregivers, regarding CeD and a GFD (Q1, questionnaire 1). Additionally, the follow-up of the pathology in clinical settings was analyzed from the perspectives of patients or their relatives (Q2, questionnaire 2) and HCPs (Q3, questionnaire 3). The questionnaires were developed with input from gastroenterologists, registered dietitian–nutritionists, and representatives of patients’ associations. Surveys were designed to be completed online and included multiple-choice answers to ensure maximum accuracy of the responses. Q1 and Q2 were intended for individuals with CeD or people who are responsible for the care of those with CeD (such as parents or guardians). They were distributed online among CeD patient association members of FACE (Spanish Federation of Celiac Societies). On the one hand, the Q1 survey contained 3 general questions on sources of information about CeD and a GFD and 14 questions to measure the knowledge of a GFD among people with CeD, mainly in relation to the gluten content of different food types and cross-contact. On the other hand, Q2 was designed by researchers from the Spanish Society of Celiac Disease (SEEC). It comprised 20 questions and was divided into 5 subsections: sociodemographic questions (4 items), information obtained from HCPs about a GFD (3 items), inquiries about sources of information (3 items), details about the follow-up to ensure dietary compliance (5 items), and questions related to knowledge about a GFD (5 items). The Q3 questionnaire was also designed by the SEEC to be answered by HCPs working with CeD patients, both pediatric and adult. It was distributed online among scientific societies related to CeD, gastroenterology, and nutrition in Spain. The Q3 questionnaire for HCPs was composed of 22 questions, divided into 3 subsections: sociodemographic background (3 items), clinical practice related to diagnosis and follow-up together with questions regarding the explanation of a GFD during follow-up (11 items), and inquiries related to knowledge about a GFD (8 items). All the questionnaires were distributed throughout the 17 Spanish autonomous communities and sampling was carried out by the snowball method. In order to ensure greater dissemination, they were also shared through the different social network platforms (Facebook, Instagram, X) of FACE and their member associations. Additionally, HCPs distributed the questionnaires among their CeD patients, aiming to reach non-member patients as well. Before the start of the study, all participants agreed to take part in it. The study was submitted to the Ethics Committee for Human Research of the University of the Basque Country, UPV/EHU (M10_2023_303). This committee established that this research does not require evaluation by the Ethics Committee for human subjects, given that the anonymized data fall outside the scope of the General Data Protection Regulation (GDPR).
2.2. Statistical Analysis Frequencies and percentages were used to conduct the descriptive analysis. Chi-square tests were used to compare the qualitative responses between groups. Results were considered statistically significant at a p -value below 0.05 (95% confidence level). Participants who did not complete the entire questionnaire were excluded from the analysis. IBM SPSS Statistics for Windows, version 28.0
(IBM Corp., Armonk, NY, USA) was used for the statistical analysis of the data.
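As a minimal illustration of the chi-square comparisons described above, the sketch below tests the independence of two groups' responses on a small contingency table and applies the 0.05 significance threshold. The counts are illustrative placeholders rather than study data, and this is not the authors' SPSS workflow.

```python
# Minimal sketch, assuming an illustrative 2x2 table of counts
# (rows: group A vs group B; columns: "yes" vs "no" responses).
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[120, 80],
                  [ 60, 140]])  # placeholder counts, not study data

# Chi-square test of independence (Yates continuity correction applies to 2x2 tables)
chi2, p, dof, expected = chi2_contingency(table)

significant = p < 0.05  # significance threshold used in the study
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}, significant = {significant}")
```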
3.1. Knowledge of Celiac Population Concerning a GFD The Q1 questionnaire involved 2437 people with CeD. Out of these participants, 2036 (83.5%) reported that they were members of a patient association. The remaining respondents cited various reasons for not being members: 252 (10.3%) cannot afford it, 94 (3.9%) believe they no longer need it, and 55 (2.3%) consider it to be of no use. Participants were asked, “Who explained to you what you know about CD?” In response, 1267 subjects (52%) said it was the doctor who diagnosed them, 1053 participants (43.2%) credited celiac associations, 62 (2.5%) mentioned a private nutritionist, 51 (2.1%) said the practice nurse, and 4 participants (0.2%) obtained information from other sources. When asked where they turn to for information about a GFD, 1253 participants (51.4%) reported using the Internet and social networks, 759 (31.1%) turned to the patient association, 371 (15.2%) consulted their doctor, 51 (2.1%) sought advice from a dietitian–nutritionist, and 3 (0.1%) looked for information through other means. Participants answered 14 questions to measure their knowledge of a GFD . Knowledge was assessed on a scale of 0–14 according to the number of correct answers. The average score was 11.06 ± 1.97 points. The distribution of the scores is illustrated in . The average total score varied based on who provided the information about CeD and a GFD. Statistically significant differences were observed between those who received the information from the doctor who diagnosed them and those who received it from the association ( p < 0.001). Those who received information through associations achieved higher scores (10.81 ± 2.02 points vs. 11.36 ± 1.83 points, respectively). 3.2. Follow-Up of a GFD in Clinical Settings 3.2.1. The Healthcare Professional’s Perspective To begin, descriptive issues of clinical practice need to be detailed. The Q3 questionnaire was distributed among multidisciplinary HCPs related to CeD. It involved 346 multidisciplinary HCPs: primary care pediatricians ( n = 125; 36.1%); gastroenterologists (42.8%) either for adult ( n = 66) or pediatric ( n = 82) patients; family physicians ( n = 47; 13.6%); nurses ( n = 9; 2.6%); dietitians–nutritionists ( n = 6; 1.7%); and other HCPs ( n = 11; 3.2%). Two-thirds of HCPs were specialized in pediatric care, while one-third were in adult care. Of these respondents, 83.2% reported diagnosing between 0 and 25 cases of CeD per year while 11% diagnosed between 25 and 50 cases annually. Regarding the follow-up care, 61.8% provide it to 0–25 people with CeD, while 38.2% indicated monitoring more than 25 patients. Participants were queried about how much time they typically spend explaining a GFD to patients during the diagnostic visit, and 91% indicated a duration of less than half an hour. Of these, 166 individuals (48%) allocate less than 15 min, while 143 (41.3%) spend between 15 and 30 min. Conversely, 31 professionals (9%) dedicate between 30 and 60 min with only 6 (1.7%) extending beyond 60 min. Despite this, the majority of the respondents ( n = 276; 79.8%) expressed a desire for more time in consultation to thoroughly guide patients in adhering to a GFD. Regarding the time spent on follow-ups to measure the adherence to a GFD in patients, it was found that 290 individuals (83.8%) reported devoting less than 15 min. Additionally, 48 (13.9%) stated they spent between 15 and 30 min. Only a small fraction, six individuals (2.3%), reported spending between 30 and 60 min. 
There were noticeable differences in consultation time, whether for diagnosis or follow-up, depending on the age of the patients. In this regard, the time spent explaining a GFD after diagnosis was related to the type of patient treated ( p < 0.001; Cramer’s V = 0.23), with a higher percentage of professionals dedicated to children in the categories denoting more time . Similarly, the time spent explaining a GFD during follow-up also showed a statistically significant association with the type of patient treated ( p = 0.014; Cramer’s V = 0.16) . Curiously, during the follow-up, the percentage of HCPs spending 30–60 min with adults was higher than with children. This could be related to the persistence of symptoms and the ongoing effort to identify their underlying causes. The willingness of HCPs to spend more time explaining a GFD was related to gender ( p = 0.005; Cramer’s V = 0.18), with women requesting more time (84.5% in women compared to 69.2% in men), and to the age of the professionals ( p = 0.047; Cramer’s V = 0.15). Professionals in the younger age groups, specifically those up to 50 years of age, requested the most time. Regarding the recommendations to visit a dietitian–nutritionist, 145 respondents (41.9%) do not recommend such visits, while a similarly sized group ( n = 146; 42.2%) said they sometimes suggested it. Merely 15 (4.3%) indicated recommending it to half of their patients and 40 (11.6%) always give this advice. Interestingly, recommendations vary depending on the age of the patients, with a higher tendency to endorse it for adults than for children ( p < 0.001; Cramer’s V = 0.25). Concerning the recommendations to join a patient association, 300 HCPs (86.7%) point out that they always advise it after the diagnosis, 31 (9.0%) say they suggest it sometimes, and 15 (4.3%) never recommend it. Interestingly, the recommendation to join a patient association was only mentioned after the initial diagnosis, but not during follow-up visits. When asked where they direct their patients when they have doubts about a GFD, the majority of respondents ( n = 316; 91.3%) recommend consulting the local celiac association. Additionally, 166 (48.0%) suggest visiting specific websites, and 116 (33.2%) refer patients to scientific societies. Only 100 (28.9%) endorse visiting a dietitian–nutritionist, 30 (8.7%) refer patients to their reference general practitioner, 19 (5.5%) to other sources, and 3 (0.8%) do not give any advice. To continue, the quality of consultation needs to be addressed. Regarding adherence to a GFD, only 41 HCPs (11.8%) claimed to use specific nutritional tools like nutritional surveys to assess adherence. A majority (63.3%) mentioned using general, open-ended, non-specific questions, while a significant number of participants (20.2%) do not ask their patients about adherence-related issues. As far as the HCPs’ knowledge about a GFD is concerned, participants answered four questions to measure their knowledge of a GFD . Knowledge was assessed on a scale of 0–4 according to the number of correct answers. The average score was 2.06 ± 0.94 points. The distribution of the scores is illustrated in . Noticeably, 61 HCPs (17.6%) believe that quinoa and amaranth may contain gluten, and 245 (70.8%) believe that declaring possible traces of gluten on the label is mandatory. Moreover, approximately 15% cannot name more than three critical points where cross-contamination might occur, or cannot specify any at all.
In terms of knowledge and information about a GFD, a notable 96% of participants considered it relevant to have access to specific information, training courses, and materials. When they have specific doubts regarding a GFD, 235 (67.9%) look for the information in national and international scientific societies, 182 (52.6%) mention they use specific medical websites, 71 (20.6%) search the Internet in general and 63 (18.2%) consult specific informational blogs about CeD, 25 (7.2%) use specific resources from the specialized food industry, and 27 (7.8%) use other references. Notably, only 131 (35.0%) turn to the patient associations and consider them a valuable partner. In addition, the vast majority (93.4%) stated that the national health system should incorporate more dietitians–nutritionists to better assess patients with specific dietary needs. Small differences were observed when considering the age of HCPs ( p = 0.007; Cramer’s V = 0.16). Nearly all HCPs under 50 years of age supported this incorporation (98.1%), compared to 90.0% of those over 50 years.
3.2.2. The Patient’s Perspective A total of 1294 individuals participated in the Q2 questionnaire, ranging in age from 6 to 80 years (mean = 40.65; SD = 13.15). Of the respondents, 16.9% ( n = 219) identified as men, 82.3% ( n = 1065) as women, and 0.8% ( n = 10) preferred not to disclose their gender. Among the participants, 67.5% ( n = 873) reported being diagnosed with CeD, while 32.5% ( n = 421) were first-degree relatives of someone with this disease. The age at first diagnosis ranged from 9 months to 72 years, with an average age of 10.42 years (SD = 17.28). To begin, descriptive issues of managing a GFD need to be detailed. Regarding the first steps to follow a GFD, the majority of the respondents ( n = 924; 71.4%) agreed that their first recommendations about the diet were provided by their physician. Additionally, 182 (14.1%) reported receiving guidance from the local celiac association, 34 (2.6%) from a dietitian–nutritionist, and 13 (1%) from their nurse. Furthermore, it should be noted that 141 (10.9%) cited other sources, with friends/partners/family members ( n = 62) and self-study ( n = 51) being the most notable. However, opinions about the quality of the information provided about a GFD for the first time were diverse. A total of 415 (32.1%) considered it poor, while 204 (15.8%) found it sufficient. On the optimistic side, 354 (27.4%) regarded it as good, and 321 (24.8%) deemed it very good. Apart from that, it is noteworthy that 908 respondents (70.2%) had never consulted a dietitian–nutritionist. When asked about the sources of information they rely on when they have doubts about a GFD, local patient associations emerged as the preferred option for 804 respondents (62.1%). Additionally, 306 (23.7%) consulted their family physician, while 106 (8.2%) sought advice from dietitians–nutritionists. Furthermore, 100 (7.7%) expressed confidence in the information provided by scientific societies. In terms of social and familial networks, 230 (17.8%) sought guidance from specific blogs or influencers, while 149 (11.5%) relied on other sources such as family, friends, colleagues, and consultation groups formed on digital media platforms (like Facebook and WhatsApp). The Internet, in general, was the second most utilized source of information for addressing doubts about following a GFD, with 759 participants (58.7%) referring to it.
Differences were detected in the frequency of use of this tool: 264 (20.4%) use it infrequently, 594 (45.9%) occasionally, 96 (7.4%) monthly, and 294 (22.7%) use it weekly. When it comes to the quality of their GFD, 1082 (83.6%) believed they maintained a healthy diet, while 142 (11.0%) were unsure, and 70 (5.4%) considered their diet to be unhealthy. This positive perception may be related to the responses to the question about visiting a dietitian–nutritionist for advice, as only 34.6% of celiac patients answered affirmatively. Concerning oat consumption, a considerable percentage (68.2%, n = 882) abstain from consuming oats altogether. Among those who do consume oats, the majority opt for certified gluten-free varieties. Of the latter, 342 (26.4%) eat oats occasionally, while 62 (4.8%) include them in their daily diet. Fortunately, only a small minority (0.6%) consume oats without confirming whether they are certified gluten-free. They were also asked questions about the different food groups to assess the risk of gluten contamination and the subsequent risk of transgression of a GFD. Participants answered two questions, and this knowledge was assessed on a scale of 0–2 according to the number of correct answers. The average score was 1.70 ± 0.49 points. The distribution of the scores is illustrated in . A total of 74.5% of participants demonstrated the ability to identify gluten-free food staples based on their natural absence of gluten. A higher percentage, 95.4%, identified foods prone to contamination, although the queried foods were described within the tables provided by celiac associations . To continue, the quality of consultation needs to be addressed. When asking about follow-up medical appointments, our focus was on assessing whether these visits inquired about adherence to a GFD and the level of compliance with it. Statistically significant differences were observed depending on whether the responses were provided by the patients themselves or their first-degree relatives ( p < 0.001; Cramer’s V = 0.19). While 69% of patients with CeD responded positively, this percentage rose to 86.2% when family members were surveyed. Moreover, participants were asked whether these follow-up visits involve a thorough nutritional assessment of the patient, including evaluations of weight, height, and body composition, as well as specialized complete blood tests aimed at evaluating vitamin and mineral levels. Again, statistically significant differences were observed depending on the group being asked ( p < 0.001; Cramer’s V = 0.36). The majority of patients with CeD responded negatively ( n = 524; 60%), indicating that they did not undergo this assessment, while 102 (24.2%) family members similarly reported that such an evaluation was not conducted. Similarly, patients stated unequivocally (95.3%) that they did not have their food intake recorded to assess the nutritional quality of their diet. Finally, the patients’ caregivers interviewed also expressed a more positive evaluation regarding the perceived knowledge of the HCPs conducting follow-up on a GFD ( p < 0.001; Cramer’s V = 0.26). illustrates the obtained answers.
3.2.3. Differences and Similarities between the Perspectives of CeD Patients and HCPs The perception of the need to visit a dietitian–nutritionist varied between patients and HCPs. Among the patients, 386 (29.8%) believed it was necessary to see a dietitian–nutritionist, whereas 201 HCPs (58.1%) considered such visits essential.
This indicates that a significantly higher percentage of professionals recognized the need to consult a diet specialist ( p < 0.001; Cramer’s V = 0.24). In contrast, both groups agreed on the need for more specific training of HCPs on a GFD. A total of 1242 patients (96.0%) and 332 professionals (96.0%) considered this training necessary.
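As a worked check of this last comparison, the sketch below reconstructs the 2×2 table implied by the reported group sizes and percentages (386 of 1294 patients vs. 201 of 346 HCPs) and recomputes the chi-square statistic and Cramer's V; the resulting effect size of roughly 0.24 agrees with the value reported above. This is only an illustration of how the published figures can be verified, not the authors' analysis code.

```python
# Minimal sketch: rebuild the patients-vs-HCPs 2x2 table from the reported counts
# and recompute the effect size (Cramer's V) for the association.
import numpy as np
from scipy.stats import chi2_contingency

# rows: patients, HCPs; columns: dietitian visit considered necessary yes / no
patients_yes, patients_n = 386, 1294
hcps_yes, hcps_n = 201, 346
table = np.array([[patients_yes, patients_n - patients_yes],
                  [hcps_yes, hcps_n - hcps_yes]])

# correction=False gives the plain Pearson chi-square usually paired with Cramer's V
chi2, p, dof, _ = chi2_contingency(table, correction=False)

n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"chi2 = {chi2:.1f}, p = {p:.3g}, Cramer's V = {cramers_v:.2f}")  # ~0.24
```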
The Q1 questionnaire involved 2437 people with CeD. Out of these participants, 2036 (83.5%) reported that they were members of a patient association. The remaining respondents cited various reasons for not being members: 252 (10.3%) cannot afford it, 94 (3.9%) believe they no longer need it, and 55 (2.3%) consider it to be of no use. Participants were asked, “Who explained to you what you know about CD?” In response, 1267 subjects (52%) said it was the doctor who diagnosed them, 1053 participants (43.2%) credited celiac associations, 62 (2.5%) mentioned a private nutritionist, 51 (2.1%) said the practice nurse, and 4 participants (0.2%) obtained information from other sources. When asked where they turn to for information about a GFD, 1253 participants (51.4%) reported using the Internet and social networks, 759 (31.1%) turned to the patient association, 371 (15.2%) consulted their doctor, 51 (2.1%) sought advice from a dietitian–nutritionist, and 3 (0.1%) looked for information through other means. Participants answered 14 questions to measure their knowledge of a GFD . Knowledge was assessed on a scale of 0–14 according to the number of correct answers. The average score was 11.06 ± 1.97 points. The distribution of the scores is illustrated in . The average total score varied based on who provided the information about CeD and a GFD. Statistically significant differences were observed between those who received the information from the doctor who diagnosed them and those who received it from the association ( p < 0.001). Those who received information through associations achieved higher scores (10.81 ± 2.02 points vs. 11.36 ± 1.83 points, respectively).
3.2.1. The Healthcare Professional’s Perspective To begin, descriptive issues of clinical practice need to be detailed. The Q3 questionnaire was distributed among multidisciplinary HCPs related to CeD. It involved 346 multidisciplinary HCPs: primary care pediatricians ( n = 125; 36.1%); gastroenterologists (42.8%) either for adult ( n = 66) or pediatric ( n = 82) patients; family physicians ( n = 47; 13.6%); nurses ( n = 9; 2.6%); dietitians–nutritionists ( n = 6; 1.7%); and other HCPs ( n = 11; 3.2%). Two-thirds of HCPs were specialized in pediatric care, while one-third were in adult care. Of these respondents, 83.2% reported diagnosing between 0 and 25 cases of CeD per year while 11% diagnosed between 25 and 50 cases annually. Regarding the follow-up care, 61.8% provide it to 0–25 people with CeD, while 38.2% indicated monitoring more than 25 patients. Participants were queried about how much time they typically spend explaining a GFD to patients during the diagnostic visit, and 91% indicated a duration of less than half an hour. Of these, 166 individuals (48%) allocate less than 15 min, while 143 (41.3%) spend between 15 and 30 min. Conversely, 31 professionals (9%) dedicate between 30 and 60 min with only 6 (1.7%) extending beyond 60 min. Despite this, the majority of the respondents ( n = 276; 79.8%) expressed a desire for more time in consultation to thoroughly guide patients in adhering to a GFD. Regarding the time spent on follow-ups to measure the adherence to a GFD in patients, it was found that 290 individuals (83.8%) reported devoting less than 15 min. Additionally, 48 (13.9%) stated they spent between 15 and 30 min. Only a small fraction, six individuals (2.3%), reported spending between 30 and 60 min. There were noticeable differences in the time in consultation, whether for diagnosis or follow-up, depending on the age of the patients. In this regard, the time spent explaining a GFD after diagnosis was related to the type of patient treated ( p < 0.001; Cramer’s V = 0.23), with a higher percentage of professionals dedicated to children in the categories denoting more time . Similarly, the time spent explaining a GFD during follow-up also showed a statistically significant association with the type of patient treated ( p = 0.014; Cramer’s V = 0.16) . Curiously, during the follow-up, the percentage of HCPs spending 30–60 min with adults was higher than with children. This could be related to the persistence of symptoms and the ongoing effort to identify their underlying causes. The willingness of HCPs to spend more time explaining a GFD was related to gender ( p = 0.005; Cramer’s V = 0.18), with women requesting more time (84.5% in women compared to 69.2% in men), and to the age of the professionals ( p = 0.047; Cramer’s V = 0.15). Professionals in the younger age groups, specifically those up to 50 years of age, requested the most time. Regarding the recommendations to visit a dietitian–nutritionist, 145 (41.9%) of respondents do not recommend such visits, while a similarly sized group ( n = 146; 42.2%) said they sometimes suggested it. Merely 15 (4.3%) indicated recommending it to half of their patients and 40 (11.6%) always give this advice. Interestingly, recommendations vary depending on the age of the patients, with a higher tendency to endorse it for adults than for children ( p < 0.001; Cramer’s V = 0.25). 
Concerning the recommendations to join a patient association, 300 HCPs (86.7%) point out that they always advise it after the diagnosis, 31 (9.0%) say they suggested it sometimes, and 15 (4.3%) never recommend it. Interestingly, the recommendation to join a patient association was only mentioned after the initial diagnosis, but not during follow-up visits. When asked where they direct their patients when they have doubts about a GFD, the majority of respondents ( n = 316; 91.3%) recommend consulting the local celiac association. Additionally, 166 (48.0%) suggest visiting specific websites, and 116 (33.2%) refer patients to scientific societies. Only 100 (28.9%) endorse visiting a dietitian–nutritionist, 30 (8.7%) to their reference general practitioner, 19 (5.5%) to others, and 3 (0.8%) do not give any advice. To continue, the quality of consultation needs to be addressed. Regarding adherence to a GFD, only 41 (11.8%) HCPs claimed to use specific nutritional tools like nutritional surveys to assess adherence. A majority (63.3%) mentioned using general, open-ended, non-specific questions, while a significant number of participants (20.2%) do not ask their patients about adherence-related issues. As far as the HCP’s knowledge about a GFD is concerned, participants answered four questions to measure their knowledge of a GFD . Knowledge was assessed on a scale of 0–4 according to the number of correct answers. The average score was 2.06 ± 0.94 points. The distribution of the scores is illustrated in . Noticeably, 61 (17.6%) of HCPs believe that quinoa and amaranth may contain gluten and 245 (70.8%) believe that the declaration of gluten-free traces is mandatory. Moreover, approximately 15% do not know more than three critical points where cross-contamination might occur or they cannot specify any at all. In terms of knowledge and information about a GFD, a meaningful 96% of participants considered it relevant to have access to specific information, training courses, and materials. When they have specific doubts regarding a GFD, 235 (67.9%) look for the information in national and international scientific societies, 182 (52.6%) mention they use specific medical websites, 71 (20.6%) browse their doubts on the Internet and 63 (18.2%) do so in specific divulgation blogs about CeD, 25 (7.2%) use specific resources from the specialized food industry, and 27 (7.8%) use other references. The point here is that only 131 (35.0%) turn to the patient associations and consider them as an interesting partner. In addition, the vast majority (93.4%) stated that the national health system should incorporate more dietitians–nutritionists to better assess patients with specific dietary needs. Small differences were observed when considering the age of HCPs ( p = 0.007; Cramer’s V = 0.16). Nearly all HCPs under 50 years of age supported this incorporation (98.1%), compared to 90.0% of those over 50 years. 3.2.2. The Patient’s Perspective A total of 1294 individuals participated in the Q2 questionnaire, ranging in age from 6 to 80 years (mean = 40.65; SD = 13.15). Of the respondents, 16.9% ( n = 219) identified as men, 82.3% ( n = 1065) as women, and 0.8% ( n = 10) preferred not to disclose their gender. Among the participants, 67.5% ( n = 873) reported being diagnosed with CeD, while 32.5% ( n = 421) were first-degree relatives of someone with this disease. The age at first diagnosis ranged from 9 months to 72 years, with an average age of 10.42 years (SD = 17.28). 
To begin, descriptive issues of managing a GFD need to be detailed. Regarding the first steps to follow a GFD, the majority of the respondents ( n = 924; 71.4%) agreed that their first recommendations about the diet were provided by their physician. Additionally, 182 (14.1%) reported receiving guidance from the local celiac association, 34 (2.6%) from a dietitian–nutritionist, and 13 (1%) from their nurse. Furthermore, it should be noted that 141 (10.9%) cited other sources, with friends/partners/family members ( n = 62) and self-study ( n = 51) being the most notable. However, opinions about the quality of the information provided about a GFD for the first time were diverse. Approximately 415 (32.1%) considered it poor, while 204 (15.8%) found it sufficient. On the optimistic side, 354 (27.4%) regarded it as good, and 321 (24.8%) deemed it very good. Apart from that, it is noteworthy that 908 (70.2%) of respondents had never consulted with a dietitian–nutritionist. When asked about the sources of information they rely on when they have doubts about a GFD, local patient associations emerged as the preferred choice/option for 804 (62.1%) of respondents. Additionally, 306 (23.7%) consulted their family physician, while 106 (8.2%) sought advice from dietitians–nutritionists. Furthermore, 100 (7.7%) expressed confidence in the information provided by scientific societies. In terms of social and familial networks, 230 (17.8%) sought guidance from specific blogs or influencers, while 149 (11.5%) relied on other sources such as family, friends, colleagues, and consultation groups formed on digital media platforms (like Facebook and WhatsApp). The Internet, in general, was the second most utilized source of information for addressing doubts about following a GFD, with 759 (58.7%) of participants referring to it. Differences were detected in the frequency of use of this tool: 264 (20.4%) use it infrequently, 594 (45.9%) occasionally, 96 (7.4%) monthly, and 294 (22.7%) use it weekly. When it comes to the quality of their GFD, 1082 (83.6%) believed they maintained a healthy diet, while 142 (11.0%) were unsure, and 70 (5.4%) considered their diet to be unhealthy. This positive perception may be correlated with the responses to the question about visiting a dietitian–nutritionist for advice, as only 34.6% of celiac patients answered affirmatively. Concerning oat consumption, a considerable percentage (68.2%, n = 882) abstain from consuming oats altogether. Among those who do consume oats, the majority opt for certified gluten-free varieties. Of the latter, 342 (26.4%) partake of/eat oats occasionally, while 62 (4.8%) include them in their daily diet. Fortunately, only a small minority (0.6%) consume oats without confirming whether they are certified gluten-free. They were also asked questions about the different food groups to assess the risk of gluten contamination and the subsequent risk of transgression of a GFD. Participants answered two questions, and this knowledge was assessed on a scale of 0–2 according to the number of correct answers. The average score was 1.70 ± 0.49 points. The distribution of the scores is illustrated in . A total of 74.5% of participants demonstrate the ability to identify gluten-free food staples based on their natural absence of gluten. A higher percentage, 95.4%, identified foods prone to contamination, although it is true that queried foods were described within the tables provided by celiac associations . 
To continue, the quality of consultation needs to be addressed. When querying about follow-up medical appointments, our focus was on assessing if these visits inquired about adherence to a GFD and the level of compliance with it. Significant statistical differences were observed depending on whether the responses were provided by the patients themselves or their first-degree relatives ( p < 0.001; Cramer’s V = 0.19). While 69% of patients with CeD responded positively, this percentage escalated to 86.2% when family members were surveyed. Moreover, it was asked whether these follow-up visits involve a thorough nutritional assessment for the patient, including evaluations of weight, height, and body composition, as well as specialized complete blood tests aimed at evaluating vitamin and mineral levels. Again, significant statistical differences were observed depending on the group being asked ( p < 0.001; Cramer’s V = 0.36). The majority of patients with CeD responded negatively ( n = 524; 60%), indicating that they did not undergo this assessment, while 102 (24.2%) family members similarly reported that such an evaluation was not conducted. Similarly, patients stated unequivocally (95.3%) that they did not have their food intake recorded to assess the nutritional quality of their diet. Finally, the patient’s caregivers interviewed also expressed a more positive evaluation regarding the perceived knowledge of HCPs conducting follow-up on a GFD ( p < 0.001; Cramer’s V = 0.26). illustrates the obtained answers. 3.2.3. Differences and Similarities between the Perspectives of CeD Patients and HCPs The perception of the need to visit a dietitian–nutritionist varied between patients and HCPs. Among the patients, 386 (29.8%) believed it was necessary to see a dietitian–nutritionist, whereas 201 HCPs (58.1%) considered such visits essential. This indicates that a significantly higher percentage of professionals recognized the requirement of consulting a diet specialist ( p < 0.001; Cramer’s V = 0.24). In contrast, both groups agreed on the need for more specific training of HCPs on a GFD. A total of 1242 patients (96.0%) and 332 professionals (96.0%) considered this training necessary.
To begin, descriptive issues of clinical practice need to be detailed. The Q3 questionnaire was distributed among multidisciplinary HCPs related to CeD. It involved 346 multidisciplinary HCPs: primary care pediatricians ( n = 125; 36.1%); gastroenterologists (42.8%) either for adult ( n = 66) or pediatric ( n = 82) patients; family physicians ( n = 47; 13.6%); nurses ( n = 9; 2.6%); dietitians–nutritionists ( n = 6; 1.7%); and other HCPs ( n = 11; 3.2%). Two-thirds of HCPs were specialized in pediatric care, while one-third were in adult care. Of these respondents, 83.2% reported diagnosing between 0 and 25 cases of CeD per year while 11% diagnosed between 25 and 50 cases annually. Regarding the follow-up care, 61.8% provide it to 0–25 people with CeD, while 38.2% indicated monitoring more than 25 patients. Participants were queried about how much time they typically spend explaining a GFD to patients during the diagnostic visit, and 91% indicated a duration of less than half an hour. Of these, 166 individuals (48%) allocate less than 15 min, while 143 (41.3%) spend between 15 and 30 min. Conversely, 31 professionals (9%) dedicate between 30 and 60 min with only 6 (1.7%) extending beyond 60 min. Despite this, the majority of the respondents ( n = 276; 79.8%) expressed a desire for more time in consultation to thoroughly guide patients in adhering to a GFD. Regarding the time spent on follow-ups to measure the adherence to a GFD in patients, it was found that 290 individuals (83.8%) reported devoting less than 15 min. Additionally, 48 (13.9%) stated they spent between 15 and 30 min. Only a small fraction, six individuals (2.3%), reported spending between 30 and 60 min. There were noticeable differences in the time in consultation, whether for diagnosis or follow-up, depending on the age of the patients. In this regard, the time spent explaining a GFD after diagnosis was related to the type of patient treated ( p < 0.001; Cramer’s V = 0.23), with a higher percentage of professionals dedicated to children in the categories denoting more time . Similarly, the time spent explaining a GFD during follow-up also showed a statistically significant association with the type of patient treated ( p = 0.014; Cramer’s V = 0.16) . Curiously, during the follow-up, the percentage of HCPs spending 30–60 min with adults was higher than with children. This could be related to the persistence of symptoms and the ongoing effort to identify their underlying causes. The willingness of HCPs to spend more time explaining a GFD was related to gender ( p = 0.005; Cramer’s V = 0.18), with women requesting more time (84.5% in women compared to 69.2% in men), and to the age of the professionals ( p = 0.047; Cramer’s V = 0.15). Professionals in the younger age groups, specifically those up to 50 years of age, requested the most time. Regarding the recommendations to visit a dietitian–nutritionist, 145 (41.9%) of respondents do not recommend such visits, while a similarly sized group ( n = 146; 42.2%) said they sometimes suggested it. Merely 15 (4.3%) indicated recommending it to half of their patients and 40 (11.6%) always give this advice. Interestingly, recommendations vary depending on the age of the patients, with a higher tendency to endorse it for adults than for children ( p < 0.001; Cramer’s V = 0.25). Concerning the recommendations to join a patient association, 300 HCPs (86.7%) point out that they always advise it after the diagnosis, 31 (9.0%) say they suggested it sometimes, and 15 (4.3%) never recommend it. 
Interestingly, the recommendation to join a patient association was only mentioned after the initial diagnosis, but not during follow-up visits. When asked where they direct their patients when they have doubts about a GFD, the majority of respondents ( n = 316; 91.3%) recommend consulting the local celiac association. Additionally, 166 (48.0%) suggest visiting specific websites, and 116 (33.2%) refer patients to scientific societies. Only 100 (28.9%) endorse visiting a dietitian–nutritionist, 30 (8.7%) to their reference general practitioner, 19 (5.5%) to others, and 3 (0.8%) do not give any advice. To continue, the quality of consultation needs to be addressed. Regarding adherence to a GFD, only 41 (11.8%) HCPs claimed to use specific nutritional tools like nutritional surveys to assess adherence. A majority (63.3%) mentioned using general, open-ended, non-specific questions, while a significant number of participants (20.2%) do not ask their patients about adherence-related issues. As far as the HCP’s knowledge about a GFD is concerned, participants answered four questions to measure their knowledge of a GFD . Knowledge was assessed on a scale of 0–4 according to the number of correct answers. The average score was 2.06 ± 0.94 points. The distribution of the scores is illustrated in . Noticeably, 61 (17.6%) of HCPs believe that quinoa and amaranth may contain gluten and 245 (70.8%) believe that the declaration of gluten-free traces is mandatory. Moreover, approximately 15% do not know more than three critical points where cross-contamination might occur or they cannot specify any at all. In terms of knowledge and information about a GFD, a meaningful 96% of participants considered it relevant to have access to specific information, training courses, and materials. When they have specific doubts regarding a GFD, 235 (67.9%) look for the information in national and international scientific societies, 182 (52.6%) mention they use specific medical websites, 71 (20.6%) browse their doubts on the Internet and 63 (18.2%) do so in specific divulgation blogs about CeD, 25 (7.2%) use specific resources from the specialized food industry, and 27 (7.8%) use other references. The point here is that only 131 (35.0%) turn to the patient associations and consider them as an interesting partner. In addition, the vast majority (93.4%) stated that the national health system should incorporate more dietitians–nutritionists to better assess patients with specific dietary needs. Small differences were observed when considering the age of HCPs ( p = 0.007; Cramer’s V = 0.16). Nearly all HCPs under 50 years of age supported this incorporation (98.1%), compared to 90.0% of those over 50 years.
A total of 1294 individuals participated in the Q2 questionnaire, ranging in age from 6 to 80 years (mean = 40.65; SD = 13.15). Of the respondents, 16.9% ( n = 219) identified as men, 82.3% ( n = 1065) as women, and 0.8% ( n = 10) preferred not to disclose their gender. Among the participants, 67.5% ( n = 873) reported being diagnosed with CeD, while 32.5% ( n = 421) were first-degree relatives of someone with this disease. The age at first diagnosis ranged from 9 months to 72 years, with an average age of 10.42 years (SD = 17.28). To begin, descriptive issues of managing a GFD need to be detailed. Regarding the first steps to follow a GFD, the majority of the respondents ( n = 924; 71.4%) agreed that their first recommendations about the diet were provided by their physician. Additionally, 182 (14.1%) reported receiving guidance from the local celiac association, 34 (2.6%) from a dietitian–nutritionist, and 13 (1%) from their nurse. Furthermore, it should be noted that 141 (10.9%) cited other sources, with friends/partners/family members ( n = 62) and self-study ( n = 51) being the most notable. However, opinions about the quality of the information provided about a GFD for the first time were diverse. Approximately 415 (32.1%) considered it poor, while 204 (15.8%) found it sufficient. On the optimistic side, 354 (27.4%) regarded it as good, and 321 (24.8%) deemed it very good. Apart from that, it is noteworthy that 908 (70.2%) of respondents had never consulted with a dietitian–nutritionist. When asked about the sources of information they rely on when they have doubts about a GFD, local patient associations emerged as the preferred choice/option for 804 (62.1%) of respondents. Additionally, 306 (23.7%) consulted their family physician, while 106 (8.2%) sought advice from dietitians–nutritionists. Furthermore, 100 (7.7%) expressed confidence in the information provided by scientific societies. In terms of social and familial networks, 230 (17.8%) sought guidance from specific blogs or influencers, while 149 (11.5%) relied on other sources such as family, friends, colleagues, and consultation groups formed on digital media platforms (like Facebook and WhatsApp). The Internet, in general, was the second most utilized source of information for addressing doubts about following a GFD, with 759 (58.7%) of participants referring to it. Differences were detected in the frequency of use of this tool: 264 (20.4%) use it infrequently, 594 (45.9%) occasionally, 96 (7.4%) monthly, and 294 (22.7%) use it weekly. When it comes to the quality of their GFD, 1082 (83.6%) believed they maintained a healthy diet, while 142 (11.0%) were unsure, and 70 (5.4%) considered their diet to be unhealthy. This positive perception may be correlated with the responses to the question about visiting a dietitian–nutritionist for advice, as only 34.6% of celiac patients answered affirmatively. Concerning oat consumption, a considerable percentage (68.2%, n = 882) abstain from consuming oats altogether. Among those who do consume oats, the majority opt for certified gluten-free varieties. Of the latter, 342 (26.4%) partake of/eat oats occasionally, while 62 (4.8%) include them in their daily diet. Fortunately, only a small minority (0.6%) consume oats without confirming whether they are certified gluten-free. They were also asked questions about the different food groups to assess the risk of gluten contamination and the subsequent risk of transgression of a GFD. 
Participants answered two questions, and this knowledge was assessed on a scale of 0–2 according to the number of correct answers. The average score was 1.70 ± 0.49 points. The distribution of the scores is illustrated in . A total of 74.5% of participants demonstrated the ability to identify gluten-free food staples based on their natural absence of gluten. A higher percentage, 95.4%, identified foods prone to contamination, although the queried foods are described within the tables provided by celiac associations . Turning to the quality of consultations, when asking about follow-up medical appointments, our focus was on whether these visits included questions about adherence to a GFD and the level of compliance with it. Significant statistical differences were observed depending on whether the responses were provided by the patients themselves or their first-degree relatives ( p < 0.001; Cramer’s V = 0.19). While 69% of patients with CeD responded positively, this percentage rose to 86.2% when family members were surveyed. Moreover, respondents were asked whether these follow-up visits involve a thorough nutritional assessment of the patient, including evaluations of weight, height, and body composition, as well as specialized complete blood tests aimed at evaluating vitamin and mineral levels. Again, significant statistical differences were observed depending on the group being asked ( p < 0.001; Cramer’s V = 0.36). The majority of patients with CeD responded negatively ( n = 524; 60%), indicating that they did not undergo this assessment, while 102 (24.2%) family members similarly reported that such an evaluation was not conducted. Similarly, the overwhelming majority of patients (95.3%) stated that they did not have their food intake recorded to assess the nutritional quality of their diet. Finally, the caregivers interviewed also gave a more positive evaluation of the perceived knowledge of the HCPs conducting follow-up on a GFD ( p < 0.001; Cramer’s V = 0.26). illustrates the obtained answers.
The perception of the need to visit a dietitian–nutritionist varied between patients and HCPs. Among the patients, 386 (29.8%) believed it was necessary to see a dietitian–nutritionist, whereas 201 HCPs (58.1%) considered such visits essential, indicating that a significantly higher percentage of professionals recognized the need to consult a diet specialist ( p < 0.001; Cramer’s V = 0.24). In contrast, both groups agreed on the need for more specific training of HCPs on a GFD: a total of 1242 patients (96.0%) and 332 professionals (96.0%) considered this training necessary.
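Throughout these comparisons, the association between two categorical variables is quantified with a chi-square test and Cramer's V as the effect size (the values of 0.16–0.36 reported above). As a minimal, hypothetical illustration of how such a figure can be reproduced, the sketch below computes both statistics for a 2 × 2 table whose cell counts are loosely reconstructed from the 69% versus 86.2% adherence-question responses reported earlier; the use of SciPy is an assumption, since the original analyses were not published as code.

```python
# Illustrative sketch (not the authors' code): chi-square test and Cramer's V
# for a hypothetical 2 x 2 contingency table. Requires numpy and scipy.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = respondent group (CeD patients / first-degree relatives),
# columns = "follow-up visits ask about GFD adherence" (yes / no), loosely
# reconstructed from the 69% vs. 86.2% figures reported in the text.
table = np.array([[602, 271],
                  [363,  58]])

chi2, p, dof, expected = chi2_contingency(table)

n = table.sum()
k = min(table.shape) - 1                      # smaller table dimension minus 1
cramers_v = np.sqrt(chi2 / (n * k))

print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}, Cramer's V = {cramers_v:.2f}")
```

With these reconstructed counts the computation returns a value close to the 0.19 reported above, illustrating how the effect size is derived as the square root of chi-square divided by n times (k − 1).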
The primary objective of this study was to evaluate the knowledge about the GFD of people with CeD, as well as of HCPs involved in diagnosing and treating this condition. Additionally, this study evaluated the clinical approaches used for assessing diet adherence, considering the perceptions of both patients and HCPs. Based on the results obtained, two themes can be discussed: understanding of a GFD and compliance with the diet. Both aspects have a direct impact on adherence to a GFD. Currently, EU Regulation 1169/2011, which came into force in 2014, permits foods that naturally do not contain gluten to be labeled as “gluten-free.” However, no official regulation specifies which foods are considered naturally gluten-free. To address this, the Association of European Coeliac Societies (AOECS) has developed a classification system defining three categories: generic foods (naturally gluten-free), conventional foods (naturally gluten-free but potentially contaminated during processing), and specific foods (produced without gluten under conditions ensuring maximum safety). This classification, endorsed by all patient associations in Europe, is crucial as it facilitates the easy and safe categorization of foods. Consequently, familiarity with this classification can be regarded as a marker of better understanding among patients and their families . In this context, the results of the survey conducted by FACE (Q1) indicate that the general questions on food classification and cross-contamination are well understood, with over 78% of celiac respondents answering these questions correctly. However, when specific questions about the safety of certain foods are asked, there is a higher rate of incorrect answers: for these specific food-related questions, the correct response rate is only 62%. Regarding questions about medications in the survey designed for patients and/or their relatives, 20% expressed doubts, and more than 15% incorrectly believe that CeD patients are not a risk group for vaccination. It is noteworthy that individuals who are members of a patient association tend to have higher rates of correct answers. This is important because previous research has shown that patients who belong to celiac associations or groups have more knowledge and greater adherence to a GFD, as they receive more emotional and social support . Therefore, the role of associations in ensuring proper adherence to a GFD is crucial for patients and has been widely recognized in earlier studies . These results are consistent with those obtained from the Q2 survey targeting CeD patients or their relatives, where three out of every four respondents knew how to identify gluten-free products well, and the overwhelming majority were able to identify cross-contamination risks. This indicates a high degree of patient knowledge in these two critical areas of a GFD. In contrast, a study by Paganizza et al. in Italy rated CeD patients’ knowledge about the gluten content of foods as poor, with only 1 out of 104 participants (0.96%) answering all questions correctly . By comparison, in our study, 156 of 2437 participants (6.4%) answered all questions in Q1 correctly. The Italian study emphasized the association between CeD patients’ knowledge of a GFD and adherence to the diet, suggesting the promotion of educational and behavioral programs . Comparable results were obtained by Sahin et al.
in Turkey, where none of the CeD participants answered all the questions in a knowledge questionnaire correctly, highlighting significant gaps in knowledge . Similarly, Riznik et al. found that patients scored an average of 56.4% correct on a CeD knowledge questionnaire, indicating a widespread lack of understanding . Additionally, Pohoreski et al. found that 63% of adolescents with CeD were not sufficiently trained about a GFD . Furthermore, a recent systematic review carried out by Abu-Janb analyzed the facilitators of and barriers to adherence to a GFD among adults with CeD at various levels: individual, interpersonal, organizational, community, and systemic. This research demonstrated that, at the individual level, knowledge of the disease and/or a GFD was the most significant factor identified in the literature. Specifically, fourteen studies reported that a lack of knowledge was a barrier to GFD adherence, while up to eight studies identified a good level of awareness as a facilitator . The authors emphasized the importance of patients receiving correct nutritional education about a GFD to prevent this lack of knowledge from becoming a barrier to gluten-free adherence. These findings agree with those of our study. Another cross-sectional study, conducted by Muhammad et al., analyzed the association between receiving a GFD prescription and understanding food labeling with adherence to a GFD . It revealed that misunderstanding food labels was significantly associated with a poorer gluten-free dietary adherence (CDAT) score. More precisely, 73% of those who reported not comprehending food labels were classified as not adhering to a GFD, compared to 45% of those who understood food labels. Although we did not specifically analyze adherence to a GFD, we did assess knowledge related to food labeling. Based on Muhammad’s findings, we anticipate that patients who make errors in labeling questions may exhibit poorer adherence to the diet. Improving knowledge in this area could potentially enhance adherence . In relation to a GFD and its follow-up, between 52% and 70% of patients across the Q1 and Q2 surveys indicate that information about a GFD was given by their physician after diagnosis, with almost half considering that the information received was scarce or just sufficient. These facts are relevant because, in the survey aimed at HCPs (Q3), there are questions with a high percentage of errors on the basic aspects of a GFD. For instance, almost one out of five of the HCP respondents mistakenly believed that pseudocereals like quinoa and/or amaranth may contain gluten, and only 13.3% were aware that the declaration of gluten traces is not mandatory. These data highlight the need for improved knowledge about a GFD. This necessity is further emphasized by the limited time spent explaining the diet, with more than 90% of HCPs dedicating less than half an hour to this task after the diagnosis. Knowledge about the diet and the time dedicated to it are two fundamental areas that professionals should focus on to enhance patient adherence to treatment. Moreover, it is important to emphasize that almost all (96%) of the HCPs demand more training, indicating their perception of needing to increase their knowledge of a GFD. These results are consistent with those reported in other studies, which have shown that one of the major pitfalls is patients’ dissatisfaction with the extent and quality of the information provided by their physicians . In addition, Ukkola et al.
reported that patients were more satisfied with the counseling provided by a dietitian–nutritionist than that provided by physicians. The information provided after the CeD diagnosis was deemed inadequate in 28% of cases by physicians and 12% of cases by dietitians. The primary reasons for patient dissatisfaction were scant information (59% for physicians and 20% for dietitians) and insufficient counselor training (7% and 18%, respectively). These data align with our findings, where 50% of patients consider that they received poor information, reinforcing the idea of including dietitians–nutritionists in the ongoing care of celiac patients. Riznik and coworkers also analyzed HCPs’ knowledge about CeD in Central Europe . The authors concluded that this level of understanding is unsatisfactory given that, on average, only half of the questions were answered correctly. Although this study focuses more on knowledge about the disease in general and the diagnosis rather than a GFD specifically, the findings are comparable and can be extrapolated to our study, where comprehension about a relevant aspect of a GFD is low among professionals . Other published studies support these outcomes and underline the importance of enhancing nutritional programs among HCPs . In contrast, despite our study revealing a lack of knowledge among HCPs and their own demand for more training, the perception of patients during follow-up appears more positive than at the time of diagnosis. Specifically, 43% of celiac people and 40% of caregivers consider the level of knowledge of their physician to be acceptable, while 44% of caregivers rate it as very good. This perception is influenced by the fact that caregivers of minors with CeD, who are typically followed by pediatricians, responded to the survey. Pediatricians, as noted in our survey, demonstrate better accuracy in GFD-related questions. In this regard, Sahin et al. also found that pediatric gastroenterologists were the physicians who responded best to the questionnaire, with a score of approximately 66 out of 100 . Similar findings were obtained by Riznik et al., who observed that pediatric gastroenterologists obtained the highest scores on the knowledge questionnaire and it was speculated that this may be associated with a greater awareness about the burden of CeD . It is also plausible that over the course of follow-up, patients may acquire more knowledge about a GFD. Consequently, they might perceive HCPs as more knowledgeable, since they have fewer questions that need answering compared to the time of diagnosis. While it is encouraging to note this positive result, it is important to acknowledge the findings from studies such as Ukkola et al., which emphasize the critical nature of the information provided about a GFD at the time of CeD diagnosis compared to that during follow-up. Ukkola’s study showed that physicians’ attitudes and the guidance given at diagnosis significantly influenced patients’ experiences with the disease and their adherence to treatment after one year. Poor doctor–patient communication and scant information at diagnosis were associated with shock reaction, disapproval, and a negative attitude towards both the disease and the diet . Finally, regarding the search for knowledge, another important aspect is where patients seek information about a GFD. 
In the Q1 survey, more than half of the respondents (51.4%) reported looking for information on the Internet and social networks when they have questions about the diet, while 30% consult their local association. This can be explained by the immediacy the Internet provides for resolving doubts. Other studies, such as the one conducted in Italy in 2016, indicated that 37% of participants used the Internet for information, with this percentage increasing to 45% among those who demonstrated adequate adherence to a GFD . A more recent study conducted in 2020 showed increased use of this resource, indicating that 96% of celiac patients and their families in the Saudi Arabian Celiac Patient Support Group (SCPSG) used social networking platforms to manage their disease . The use of this resource was notably high, with 76.4% of respondents consulting it daily . These figures are considerably higher than the usual use described in Q2, where only 22.7% consult it weekly and nearly half use it only occasionally. In the SCPSG survey, the majority of respondents acknowledged that social media was helpful in increasing their understanding of the disease and their adherence to a GFD. More precisely, 78% of participants considered social media effective in raising community awareness of celiac disease, a finding similar to what we found in a previous cross-sectional study published recently . Tomlin et al. concluded that the Internet significantly influences parental knowledge of CeD. However, they emphasized that accurate information from specialists is essential to alleviate anxiety related to the use of a GFD . This is relevant because it points to a new source of information about a GFD that will have to be managed from the professionals’ consultation, including, as part of the dietary advice, guidance on where to look for reliable information on the Internet. However, for this to happen, it is essential that HCPs are also aware of these resources and are able to validate information from Internet and social media sources as well. Continuing with the resources used to improve knowledge and resolve doubts, while three-fifths of patients (62.1%) turn to the celiac association with their doubts, only one-third of HCPs utilize this resource, even though they mostly recommend going to an association after diagnosis. This disparity is noteworthy because patient associations are becoming increasingly professionalized and are staffed by dietitians–nutritionists and psychologists, as well as professionals with specialized postgraduate training. These resources, currently underutilized by HCPs, can serve as valuable allies or stakeholders for HCPs in addressing the disease. The survey results showed that most of the HCPs stated that the National Health System should incorporate more dietitians–nutritionists for stronger dietary monitoring and compliance with the specific dietary needs of patients. In this sense, a recent review highlighted the usefulness of clinical follow-up of the diet by a specialized dietitian–nutritionist since, among other advantages, the early detection of transgressions actually results in cost savings for the healthcare system . Interestingly, this perspective contrasts with the fact that HCPs often do not recommend that their patients visit a dietitian or nutritionist. It is plausible to suggest that this inconsistency is due to the belief that such services should be covered by the healthcare system rather than being the financial responsibility of the patient.
This fact is corroborated by a study carried out in Spain that evaluated the integration of dietitians–nutritionists into multidisciplinary teams across primary, specialized, and public healthcare and revealed a low or virtually non-existent implementation at the state level . Educational programs can help to address the detected gaps by first identifying the real concerns, requirements, uncertainties, and challenges faced by CeD individuals and HCPs . Next, the type of educational program should be tailored to the target audience described. Similarly, it is essential that those delivering nutrition education have adequate training, highlighting the role of dietitians–nutritionists. Regarding the methodology, it has been proven that group-based educational programs are successful in improving both gastrointestinal symptoms and overall quality of life . In the case of children, parental involvement in the program is essential . Nutritional education can be delivered through face-to-face sessions or online, as significant results have been published through virtual formats . Finally, e-learning is effective in improving the comprehension of a GFD in children and their families , and it has also been proposed as a useful tool for HCPs . The strength of this study lies in assessing knowledge about gluten-free foods and the dietary follow-up of CeD, addressing a GFD in terms of both knowledge and clinical practice. In addition, the high number of participants, which reached 3731 among CeD patients and their relatives, adds robustness to the results. Moreover, the substantial participation of HCPs from various specialties, covering both adult and pediatric patients, further strengthens this study. A noteworthy aspect of this study is the parallel consideration of both patient and HCP perspectives in diet follow-up, enabling a comparison between them. However, there are some weaknesses. The attempt to limit the number of questions led to incomplete coverage of both perspectives in certain areas. These surveys were conducted nationwide in Spain; therefore, the conclusions may not be applicable to other healthcare systems or cultural contexts. Another limitation of this study may be the potential self-reporting bias in the questionnaire responses, particularly with regard to the knowledge and practices of healthcare professionals. Data collection through FACE and its associations may introduce bias, as many respondents are linked to patient associations. Previous studies have shown that members of these associations are more familiar with and adhere more closely to the dietary guidelines. Additionally, when information is provided by a family member, it is often assumed that the patient is a pediatric case, although this cannot be confirmed categorically because this information was not specifically requested.
The knowledge of the celiac population and their caregivers regarding gluten-free foods is insufficient to ensure correct adherence to a GFD and achieve the nutritional balance of the diet. From the perspective of HCPs, the very limited time available during consultations, along with the need for additional specialized training, may explain the lack of knowledge among healthcare providers and their restricted ability to monitor adherence to a GFD. HCPs agree that this task should be carried out by dietitians–nutritionists, but referrals to these diet specialists are recommended only on a limited basis, probably due to their minimal presence in public healthcare. Patient associations frequently fill this gap, but patients and caregivers often resort to less reliable sources of information, such as the Internet and social networks, when they have doubts. A fundamental step forward is to enhance nutritional education, not only for patients but also for clinicians, and to reinforce the social networks consulted so that the information disseminated is reliable and scientifically based.
|
The past and future of industrial hygiene in Japan | 969f4408-95ca-4bcf-95f1-b6cd317bbb8d | 10079497 | Preventive Medicine[mh] | Industrial hygiene in Japan has generally been considered emerge in the mid to late 1950s. Of course, even before the 1950s, the importance of ensuring workers’ health had been recognized mainly in the medical field; however, it was not until the “Hepburn Sandal Incident” that industrial hygiene research, which incorporated technology and information from the science and engineering fields, was launched in earnest under the leadership of the Japanese government. The Hepburn Sandals Incident was a major industrial disease in Japan during the mid to late 1950s that was triggered by the success of one American romantic movie. The movie “Roman Holiday”, released in Japan in 1954, was a huge hit, and the sandals worn by the lead actress (Audrey Hepburn) in the movie immediately became widely popular among young Japanese women. At this time, most footwear used in Japan, including sandals, were produced by small-scale manufacturers with only several employees. Unfortunately, at a time when laws and regulations to protect workers’ health were absent, most workers in sandal manufacturers were exposed to and unprotected against toxic solvents, such as benzene, used in the production processes. Benzene, which today requires extremely strict control due to its high carcinogenic potential, was not regulated in Japan at that time. Therefore, workers—many of whom were young females—in sandal manufacturing workshops were exposed to high concentrations of benzene vapor on a daily basis, which produced a large number of victims in a short period of time. The Japanese government responded promptly and promulgated the Ordinance on Prevention of Organic Solvent Poisoning in 1960 to prevent incidents of benzene poisoning, which had frequently occurred among small-scale footwear manufacturers. The ordinance was subsequently incorporated into the Industrial Safety and Health Law (1972) and has continued to significantly impact Japanese industrial hygiene from 1960 to the present considering that the ordinance specifies the methods for measuring organic solvent concentrations and ventilation requirements for workplaces involved with organic solvents. On the other hand, the major early administrative measure in Japan for occupational dust exposure was the enactment of the Pneumoconiosis Law in 1960. Unlike the Ordinance on Prevention of Organic Solvent Poisoning mentioned earlier, the Pneumoconiosis Law regulates workers’ health care and does not provide for working environment control. Thus, it had no significant and direct impact on industrial hygiene research in Japan. The Pneumoconiosis Law was amended several times thereafter; however, even 20 years after its enactment, it made no significant contribution to the reduction of pneumoconiosis. In 1978, the Japanese government enacted the Ordinance on Prevention of Hazards Due to Dust , which mandated the wetting and sealing of dust sources, installation of various types of ventilators, wearing of personal protective equipment, and working environment measurements. This ordinance contributed to the promotion of research on methods of measuring dust concentration, particle size, and chemical composition, as well as research on techniques to protect workers from dust, such as designing effective ventilation systems and the development of high-performance dust masks. 
The ordinance can be considered successful given that it promoted a decrease in the number of newly diagnosed pneumoconiosis cases from 6,842 in 1980 to 124 in 2020. Indeed, industrial hygiene in Japan has reduced the number of occupational diseases in conjunction with various government regulations; however, it must be noted that the needs for industrial hygiene have gradually changed as society has evolved. Since the mid-20th century, the share of the tertiary sector in Japanese industry has steadily expanded. According to the Japanese Census, the share of tertiary workers in 1950 was 29.6%, whereas that in 2019 was 71.2%. In line with this, ensuring the health of office workers, caregivers, delivery service providers, and hospitality workers, such as preventing low back pain, muscle fatigue, eye strain, and passive smoking, has emerged as an important issue for industrial hygiene, increasing the presence of ergonomics, aerosol science, and chemical engineering. In this context, the relative presence of conventional industrial hygiene declined over time, and the “Osaka Occupational Cholangiocarcinoma Disaster (2012)” occurred. The “Osaka Occupational Cholangiocarcinoma Disaster” was an industrial disease that occurred at a small printing factory in Osaka City, in which 17 employees developed cholangiocarcinoma, among whom 9 died. Subsequent investigations found that the primary cause of their cholangiocarcinoma was exposure to dichloropropane (DCP), which was used to clean the printing presses. However, no legal restrictions on the use of DCP were in place at this time. This industrial disease prompted Japanese labor administrators and industrial hygienists to recognize that conventional controls of chemical substances through legal restrictions alone were insufficient to protect workers’ health. The Japanese government immediately designated DCP as a regulated substance while developing a new law on risk assessment for chemicals. Currently, DCP is classified as a “special organic solvent” under the Ordinance on Prevention of Hazards Due to Specified Chemical Substances , which requires particularly strict control measures for use. In 2016, the Industrial Safety and Health Law was amended to require discretionary risk assessment, in which chemical users have discretion in the frequency and method of their assessment, for 640 chemicals, including approximately 520 chemicals that have yet to be legally regulated. Since 2016, chemicals subject to risk assessment have been added continuously, with risk assessment being mandatory for 674 chemicals as of January 2023. In the future, the Japanese government plans to increase the number of substances subject to risk assessment, which is expected to reach approximately 3,000 substances within a few years. In addition, the government intends to require personal exposure measurement using a personal sampler in addition to conventional working environment measurements based on area sampling. Along with these, the government also plans to essentially abolish the Ordinance on Prevention of Organic Solvent Poisoning , the Ordinance on Prevention of Hazards Due to Specified Chemical Substances , and other ordinances that have been the cornerstones of Japanese industrial hygiene, although no definite date has yet been finalized as of January 2023. As mentioned earlier, these ordinances specify not only the measurement procedures for the substances concerned but also countermeasures against exposure to them.
For example, when a local exhaust ventilation (LEV) system is applied to prevent exposure to regulated organic solvents, the current ordinance specifies the type of exhaust hood to be applied and the exhaust flow velocity. Therefore, after the abolishment of the ordinance, the users of an organic solvent will be responsible for selecting exposure control methods, including the LEV, at their own discretion. However, it may be difficult for most users to select appropriate control methods independently. Currently, the Ministry of Health, Labor and Welfare Japan (MHLW) is preparing a collection of “Recommended Case Studies for Reducing Chemical Exposure” through the National Institute of Occupational Safety and Health, Japan (JNIOSH), which, once completed and released, will be of great benefit to many industrial hygienists who are struggling with countermeasures against hazardous substances. One of the serious problems expected to face the Japanese industrial hygiene system in the near future is the shortage of young, professionally trained industrial hygienists. In fact, until just a few years ago, three Japanese universities, namely Kitasato University, the University of Occupational and Environmental Health, Japan (UOEH), and Waseda University, offered specialized industrial hygiene courses, but now only the School of Health Sciences at UOEH remains. Furthermore, even JNIOSH, which is intended to be the national center of occupational safety and health research in Japan, appears set to abolish its research branch on industrial ventilation within a few years. As such, the future of industrial hygiene in Japan will perhaps be directed not by experts from universities or public research institutes but primarily by engineers from ventilation or protective equipment manufacturers, or by publicly licensed professionals, such as certified consultants, occupational hygienists, industrial physicians, official health supervisors, and environmental measurement specialists, who are in charge of health and safety practices in the workplace.
Relatives’ experiences of visiting restrictions during the COVID-19 pandemic’s first wave: a PREMs study in Valais Hospital, Switzerland | e11be2bf-f0c0-4826-8906-e9f5d36e3fe6 | 10510254 | Internal Medicine[mh] | The first cases of COVID-19 struck the Canton of Valais, Switzerland, at the end of February 2020. The Swiss Confederation and the Canton of Valais enacted significant measures to limit the virus’ spread throughout the population, including a reorganization of acute hospital care . During the onset of the pandemic, from March 15 to April 30, 2020, the Canton instituted COVID-19-related visiting restrictions on healthcare institutions, prohibited patients from leaving their rooms, and closed hospital restaurants, coffee shops, and other communal areas. All visits were banned, including by relatives, with some exceptions made for parents visiting pediatrics wards . These decisions were made when knowledge about the virus’ spread was not very advanced regarding patient safety . Restricting hospital visits during the COVID-19 pandemic’s first wave had the following aims: (1) preventing the transmission of SARS-CoV-2 from the community into acute hospitals (infecting healthcare staff and patients), (2) preventing transmission in the other direction (infecting visitors), and (3) maintaining adequate supplies of personal protective equipment . At the pandemic’s onset, potential visitor-related COVID-19 outbreaks were considered a substantial risk and, thus, all types of visits were restricted, despite a lack of scientific evidence linking visitors to SARS-CoV-2 transmission in hospitals . Restricting visits is not only an emotional hardship for patients and relatives, but healthcare staff also perceive the absence of relatives at the bedside to be a hindrance to delivering person- and family-centered care . The present study defined relatives as the non-professional persons providing physical help and psychological support to patients, and they could be family members, friends, or acquaintances . Patient accompaniment by healthcare staff can be conceptualized as social, emotional (e.g., moral support), and informational support (e.g., helping to facilitate patient–healthcare staff communication) that increases beneficial health outcomes. Accompanying patients in those three dimensions may not always be feasible for many relatives due to work or other responsibilities during hospitalization. However, accompaniment by relatives can significantly influence chronic illness self-care. The presence of relatives facilitates communication between patients and healthcare professionals and enhances patients’ satisfaction with them. Understanding how the mechanisms of relatives’ involvement influences care and outcomes is critical to better understanding the concept of visiting restrictions . Under normal circumstances, relatives at the bedside can observe how different healthcare staff care for their loved ones . Depending on the patient’s disabilities and unique needs, relatives can learn how to assist their loved ones in the activities of daily living and note whether they are experiencing discomfort. Learning how to react at the bedside enables relatives to become accustomed to the patient’s changing condition and better help manage discharge planning and support needs . Research has shown the significant numbers of medical and nursing tasks performed by relatives at home with limited guidance . However, some care situations could have been exacerbated by COVID-19 visiting restrictions . 
A relative can help overcome language barriers and health literacy problems caused by clinical jargon or can assist physically weakened and/or mentally inhibited patients . Previous studies have also demonstrated that relatives are crucial to the early detection of delirium, a common, often unrecognized condition present in frail older adult inpatients diagnosed with dementia or multiple other chronic conditions and polypharmacy . The regular presence of relatives at the bedsides of those patients most at risk of delirium can reduce its onset and limit long-term functional decline . Recent data revealed that the longer the hospital length of stay (LOS), the more relatives were emotionally affected by visiting restrictions . Relatives also take on an advocacy role when they communicate practical suggestions about patients’ habits or additional needs to healthcare staff, thus facilitating patient–staff communication . Sahoo et al. and Vincent et al. (2021) reported significant associations between additional stress, affect, visiting restrictions, and LOS . The COVID-19 pandemic also changed patient discharge planning, undermining usual discharge processes. Before the COVID-19 pandemic, healthcare staff used a discharge procedure designed to bring relatives and the patient together to discuss critical information on the support that would be needed at home . This exchange increased the chances of the patient subsequently remaining at home and optimized the discharge process. Under COVID-19 visiting restrictions, these conversations altered dramatically and may have caused problems in the dialogue between healthcare staff and relatives, reducing the possibilities of ensuring consensus-based care and increasing the risks of unplanned hospital readmissions . To maintain the links between patients, relatives, and healthcare staff, the Valais Hospital offered a variety of digital and technical means to replace physical visits . However, recent studies have highlighted that video or telephone meetings with the relatives of patients in acute care settings led to fewer changes to care goals than in-person meetings . Substitute visiting methods, such as digital and multimedia applications, lowered relatives’ comprehension of the patient’s overall condition, reducing opportunities to maintain social relations . Recent research has shown that relatives’ experiences during the uncertain context of COVID-19 led to frustrations, especially among older adults . This was linked to not being able to see how their loved one was being cared for and having to put their trust in healthcare institutions . Unclear information or inconsistencies in institutional policy contributed to these uncertainties and relatives sought efficient face-to-face communication . Hoffman et al. highlighted the need for personal attention from relatives . To assess inpatients’ relatives’ experiences with regard to the visiting restrictions imposed during COVID-19’s first wave, we distributed a patient-reported experience measures (PREMs) questionnaire to all the patients hospitalized in the Valais Hospital between the end of February and mid-May, 2020, and to their relatives. The following research questions guided this research. How were the relatives subjected to visiting restrictions distributed? How were the relatives subjected to visiting restrictions affected by this situation compared to relatives not subjected to visiting restrictions? 
How did relatives (whether subjected to visiting restrictions or not) perceive the information they received, communication with staff, and their own involvement in the care of their loved ones? How did relatives maintain contact with their loved ones?
Design, research population, and setting Following the approval from the Human Research Ethics Committee of the Canton of Vaud (2020–02025), Valais Hospital’s data science warehouse provided the contact details of all the adult inpatients (18 years and older) discharged alive to their home or a nursing home between February 28 and May 13, 2020. These were extracted from administrative, electronic patient records in the hospital’s patient register. A paper questionnaire was sent out to these patients, including an explanation sheet describing the nature of the survey and a questionnaire for their relative, if appropriate. Patients were free to choose whether to participate. Anonymously returning the questionnaire in the attached postage-paid envelope was considered consent to participate in our study for both patients and relatives. The previously published research protocol describes the PREMs methodology used for our survey . Study framework The Quadruple Aim healthcare framework guided the study, highlighting the medical and social needs of hospitalized patients and their relatives, emphasizing the impacts of their unmet needs, and describing the importance of partnerships between the healthcare system and formal and informal caregivers . Relatives involved in care delivery have also recently become an acknowledged essential component of overall health system performance, based on the principles of patient and public involvement described in PREMs . PREMs instruments look at the care process’s impact on patients’ and relatives’ experiences, e.g., involvement in care, communication with staff, information sharing, and the overall care experience. Our PREMs questionnaire included open and closed questions to capture patients’ and relatives’ perceptions of their interactions with the healthcare system and the degree to which their needs were considered . This paper reports on the PREMs survey’s written feedback on relatives’ experiences during the COVID-19 pandemic’s first wave and the visiting restrictions imposed on them (Fig. ). The PREMs instrument Our self-reporting data-collection questionnaire was designed based on a literature review and four semi-structured exploratory interviews with previously hospitalized patients and their relatives . A returned questionnaire from the patient and relative served as a proxy for written consent to participate. The first section, including 14 closed questions, asked patients about sociodemographic data, sex, age, marital status, educational level, and their hospital trajectory as a patient, as well as about their stress level , trust in healthcare professionals (nurses and physicians) , feelings of safety , whether they had been infected by SARS-CoV-2, and perceptions about the disease’s severity during the hospitalization period . The second section included eight closed questions and one open-ended question for the discharged patient’s relative (if they were directly involved in the patient’s hospitalization) (Additional file ). Due to legal restrictions covering data protection and confidentiality, we were not allowed to collect sociodemographic data on relatives. These questions were: Were you able to visit your relative in the hospital? [Yes/No]; If not, how did you maintain contact with your loved one? [(i) Telephone with professional caregivers, (ii) email, (iii) other]. If not, how much did this affect you? 
[(i) I was not affected, (ii) I was slightly affected, (iii) No opinion, (iv) I was moderately affected, (v) I was very affected]; How did you perceive the information you received about the COVID-19 pandemic during your loved one’s hospital stay? [(i) Totally inadequate, (ii) Inadequate, (iii) Slightly inadequate, (iv) No opinion, (v) Just good enough, (vi) Adequate, (vii) Very good]; How would you rate communication with the staff? [(i) Poor, (ii) Passable, (iii) Good, (iv) Very good, (v) Excellent]. As a close relative, how did the hospital staff treat you? [(i) I was not taken into consideration at all, (ii) I was moderately taken into consideration, (iii) I was fully taken into consideration]. How serious do you think the COVID-19 pandemic is? [(i) Not at all serious, (ii) Not very serious, (iii) Slightly serious, (iv) Serious, (v) Very serious] . Would you like to add any comments about your experience of your loved one’s hospitalization during the pandemic? Data collection procedure All eligible participants received a letter by post inviting them to participate in the survey. This was followed by a reminder two weeks later. Besides the paper questionnaire, an information sheet explained the study’s background, the data sought, and our participant data protection strategy. Participants were asked to complete the paper questionnaire and return it in the prepaid envelope provided. Waiting for ethics clearance and heavy workloads meant that the data warehouse only started its information gathering in August and finished in December 2020 (Fig. ). Data analyses Data were anonymized to ensure participant anonymity and respect good research practice in this type of study, as per the Declaration of Helsinki . Data were imported into IBM SPSS® software, version 28 (IBM Corp, Armonk, New York, USA), for analyses. Our statistical power calculation was based on an alpha error of 0.01, a statistical power of 0.99 (type II error of 0.01), and a mild effect size of 0.3. The minimum sample required for sufficient statistical power was 740 relatives. We analyzed the number of responses and missing values for each variable and reported them in our tables ( n = answers). Parametric properties were analyzed for the normality of their distributions and the equality of their variances using the Kolmogorov–Smirnov test. Non-parametric tests were performed for variables with non-normal distributions to compare relatives who were and were not affected by visiting restrictions. The population was described using descriptive statistics with frequencies, distributions, and leading trends. Data collected using Likert scales were analyzed using descriptive and inferential statistics. LOS was recoded as a dichotomous variable of 1–14 days and ≥ 15 days, based on the median patient LOS . Bivariate analyses were conducted using cross-tabulations between relatives impacted and not impacted by visiting restrictions during their loved one’s hospitalization. Spearman’s rank correlation measures were computed between sociodemographic variables and the closed questions. We computed a linear multivariate regression model to analyze how visiting restrictions predicted relatives’ affect scores, their satisfaction with information received about the COVID-19 pandemic, satisfaction with communication with staff, how well healthcare staff considered relatives, and perceptions of how serious the COVID-19 pandemic was.
The model estimated each predictor’s net impact, other things being equal, and it gave predictions for the entire sample, not just specific individuals. A content analysis of relatives’ responses to the open-ended question was made using NVivo12 software (QSR International, 2021). Quantitative results were considered statistically significant when p < 0.01. All p -values were based on two-tailed tests, and all the analyses were supervised and reviewed by a biostatistician.
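For readers who want to trace the quantitative pipeline described above, the short sketch below reproduces its main steps in Python (pandas/SciPy). This is an illustration only: the study itself used IBM SPSS version 28, the column names are hypothetical, and the specific non-parametric test shown (Mann–Whitney U) is one typical choice, since the exact tests are not named in the text.

```python
# Illustrative sketch only: mirrors the described analysis steps in Python.
# Column names ("affect", "restricted", "age", "los_days") are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("prems_relatives.csv")   # hypothetical export of the survey data
alpha = 0.01                               # two-tailed significance threshold used in the study

# 1. Kolmogorov-Smirnov check of normality for a score variable.
affect = df["affect"].dropna()
ks_stat, ks_p = stats.kstest(affect, "norm", args=(affect.mean(), affect.std()))

# 2. Non-parametric comparison (Mann-Whitney U, one typical choice) of relatives
#    subjected vs. not subjected to visiting restrictions.
restricted = df.loc[df["restricted"] == 1, "affect"].dropna()
unrestricted = df.loc[df["restricted"] == 0, "affect"].dropna()
u_stat, u_p = stats.mannwhitneyu(restricted, unrestricted, alternative="two-sided")

# 3. Dichotomize length of stay at the 14-day cut-off (1-14 days vs. >= 15 days).
df["los_cat"] = (df["los_days"] >= 15).astype(int)

# 4. Spearman rank correlation between a sociodemographic variable and a closed question.
rho, rho_p = stats.spearmanr(df["age"], df["affect"], nan_policy="omit")

print(f"KS p = {ks_p:.3f}; Mann-Whitney p = {u_p:.3f}; "
      f"Spearman rho = {rho:.2f} (p = {rho_p:.3f}); alpha = {alpha}")
```

Each resulting p-value would then be compared against the 0.01 threshold reported above; the cross-tabulations mentioned in the text correspond to contingency tables built from the same dichotomized variables.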
Study participants Of 4,523 eligible participants hospitalized during the COVID-19 pandemic’s first wave, 1,341 (29.6%) returned the questionnaire. Of these, 1,312 were valid (> 50% of questions completed), with 866 relatives completing the section dedicated to them, 818 of which were analyzable (> 50% of questions completed), representing 65.5% of the valid patient responses. Sociodemographic characteristics of patients and their relatives Participants – hospitalized patients Median participant age was 64 years old (IQR 1–3 = 45–76). During the study period, 141 (10.9%) respondents had tested positive for SARS-CoV-2 infection at the hospital laboratory, and 1,148 (89.1%) were uninfected. Discharged patients’ sociodemographic data are detailed in Table . Responding relatives Of 866 PREMs questionnaires completed by patients’ relatives, 818 (95%) were analyzable, including 106 (75%) relatives of the 141 SARS-CoV-2-infected participants. Among the 1,086 non-COVID-19 participants, 712 (87%) relatives responded to the PREMs questionnaire’s second section. We found significantly higher survey participation rates among the relatives of: patients infected with SARS-CoV-2 ( p < 0.001); older patient age groups ( p = 0.008); patients with longer LOS ( p < 0.001); and patients in certain hospital wards (intermediate care and ICU) ( p < 0.001). Visiting restrictions A total of 543 relatives were subjected—either entirely or partially—to visiting restrictions during their loved one’s hospitalization, including 92 (87%) relatives of SARS-CoV-2-infected patients and 451 (63%) relatives of non-infected patients. Relatives of SARS-CoV-2-infected patients were significantly more emotionally affected than the relatives of non-infected patients (81% vs. 61% at least moderately affected, respectively; p < 0.001) (Table ). Contrarily, no significant differences were found in how strongly relatives subjected to visiting restrictions were affected according to age group ( p = 0.815) and LOS ( p = 0.185) (Table ). Relatives’ perceptions of the severity of the COVID-19 pandemic Using the standard questionnaire and scale for risk perception during an infectious disease outbreak, as developed by the Municipal Public Health Service of Rotterdam-Rijnmond , relatives’ overall median score for the perceived severity of a SARS-CoV-2 infection was 4 (IQR 1–3 = 3–5). No significant differences were found between relatives subjected to visiting restrictions (median 4; IQR 1–3 = 3–5) and those not (median 4; IQR 1–3 = 3–5) ( p = 0.085). Relatives’ involvement in care Consideration of relatives in the care process Overall, most relatives felt well considered by healthcare staff ( n = 406; 54.4%) when it came to involvement in the provision of care. Given the exceptional public health situation caused by the COVID-19 pandemic, relatives waiting to hear from their loved ones felt stressed and disturbed. A smaller fraction felt less well considered ( n = 218; 29.1%), and 124 (16.6%) did not feel considered at all in the provision of care. A small fraction (< 5%) reported hospital healthcare staff to be unavailable to inform them of their loved one’s health status. Significant differences were found between patient age groups ( p < 0.001) and between relatives subjected and not subjected to visiting restrictions ( p < 0.001). No significant differences were found regarding LOS ( p = 0.060) or hospitalization ward ( p = 0.316) (Table ).
Sharing information and communication between healthcare staff and relatives Despite healthcare staff’s poor availability due to extremely high workloads, most relatives felt well informed by them ( n = 426; 53.0%), with an overall median score of 6 (IQR 1–3 = 1–6). Fewer respondents felt moderately well informed ( n = 68; 8.5%) or poorly informed ( n = 309; 38.5%) by healthcare staff. Among relatives subjected to visiting restrictions, no significant differences were found regarding perceived levels of information between the sexes ( p = 0.080), between SARS-CoV-2-infected or non-infected patients ( p = 0.254), between age groups ( p = 0.248), and between different LOS ( p = 0.220). Contrarily, significant differences were found between hospitalization wards ( p < 0.001) (Additional file ). Relatives reported a reasonable overall median score of 3 out of 5 (IQR 1–3 = 3–4) on the quality of their communication with hospital healthcare staff, although relatives subjected to visiting restrictions reported significantly lower scores than those not subjected to them ( p < 0.001). One-fifth of relatives found communication poor or acceptable. No significant differences were found between relatives subjected to visiting restrictions and those not with regards to communication, LOS, and hospitalization wards (Additional file ). Among the full sample of relatives ( n = 818), 563 (69%) reported regularly communicating with their hospitalized loved ones (at least once a day), and 179 (22%) reported having at least one telephone contact with Valais Hospital staff. A small number of relatives ( n = 6) communicated with the patient by email. Other methods for maintaining contact between relatives and patients were videoconferences using FaceTime®, WhatsApp®, Zoom®, or Skype® ( n = 25), mobile phone and SMS text messages ( n = 9), exchanges at the hospital window or outside the ward ( n = 9), being hospitalized in the same hospital room ( n = 1), or communication through the family physician ( n = 3). Multivariate linear regressions of affect scores Simultaneous multiple linear regressions were calculated to investigate the best predictors of affect scores among relatives subjected to visiting restrictions. The combination of patient age in years, sex, LOS, and the hospitalization wards of medicine, surgery, psychiatry, gynecology, intermediate care/ICU, and rehabilitation/geriatrics significantly predicted affect scores ( F (9, 4.421) = 7.294; p < 0.001). The hospitalization wards of medicine ( p = 0.027) and gynecology/obstetrics ( p = 0.028) also significantly predicted relatives’ affect scores (Table ). The adjusted R 2 value was 0.105, indicating that the model explained 10.5% of the variance in the affect scores. According to Cohen, this is a mild-to-moderate effect . Relatives’ freely expressed experiences of visiting restrictions Almost one-fifth ( n = 71) of the relatives subjected to visiting restrictions described their lived experiences in our open-ended question. Relatives of patients hospitalized in gynecology/obstetrics Fathers were initially excluded from attending the mother’s initial labor, causing a lot of frustration and stress for both. Relatives understood the need for preventive measures against the SARS-CoV-2 virus, but they did not consider their loved ones as sick patients, finding the prohibition on visiting too extreme. Limitations and even prohibitions on visits by fathers were not well received, especially the time limit of 30 min. 
Being deprived of this unique life experience, unable to provide support to the mother or see the child’s birth and their first days of life, was a very bad experience for fathers, filled with intense regrets. The following comment summed up the disagreements with maternity ward visiting restrictions: “In the case of childbirth, the father’s place—who could have been tested before—is next to the mother and the child. Don’t you think?” (Relative-223) Neonatology Limitations and even prohibitions on visiting the neonatology ward were very badly received by relatives. Relatives prohibited from visiting the neonatology unit stated the following: “Understanding the hospital sector’s state of stress… I had expected a different appreciation of priorities... For me, hospitalization in neonatology should ensure the right to visits no matter what.” (Relative-345) Emergency department visits and the hospitalization of frail subjects The prohibition on visits also affected relatives accompanying their loved ones to urgent admissions to the emergency department. The moment of this imposed separation—leaving their loved one to the unknown—aroused very strong emotions, including worry, anxiety, stress, the fear of not seeing them again, and intense apprehension while waiting for news. They expressed these emotions as follows: “The ban on visits is traumatic for all relatives.” (Relative-87) “It is tough to leave a loved one—especially my sick wife—outside the door without accompanying her or supporting her during these difficult moments, but I understand the measures taken.” (Relative-340) Visiting restrictions were very badly received by relatives and frail patients alike, especially when involving patients with cognitive disorders or at the end of life, with whom video calls were complicated or impossible. Families reported the physical and psychological regression they observed in their loved ones due to the lack of stimulation usually provided during visits. For other patients, compensating for the prohibition on visits by using video calls, telephone calls, and text messages was greatly appreciated (for more details, see Additional file ).
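As a point of reference for the "mild-to-moderate effect" label attached to the regression model above (adjusted R² = 0.105), the corresponding Cohen's f² effect size can be derived directly from the variance explained; this conversion is our own illustration and is not a value reported by the survey:

f² = R² / (1 − R²) = 0.105 / 0.895 ≈ 0.12

Against Cohen's conventional benchmarks for multiple regression (f² of roughly 0.02, 0.15, and 0.35 for small, medium, and large effects), a value near 0.12 falls between small and medium, consistent with the mild-to-moderate interpretation given above.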
To the best of our knowledge, this research was the first to use a PREMs questionnaire to examine the impact of visiting restrictions on patients and their relatives in a hospital setting during the COVID-19 pandemic’s first wave in Switzerland. The Valais Hospital’s values, and those of its healthcare staff, recognize relatives’ important role in their loved one’s healthcare and hospital discharge trajectories. However, this was a very challenging period for patients, relatives, and staff, with unforeseen and unpredictable events, daily changes, and many restrictions. The sudden implementation of visiting restrictions destabilized the hospitalization process and relatives’ roles within that process. Obtaining an elevated response rate (75%) from the responding patients’ relatives was, therefore, not surprising as it offered them a chance to express both their positive and negative lived experiences of these extreme health circumstances. This study was specifically conducted during the COVID-19 pandemic’s first wave, and considering relatives as essential partners in care—and not just as visitors—is part of the Valais Hospital staff’s mission. Relatives of a SARS-CoV-2-infected patient were more likely to have revealed how affected they were by the visiting restrictions than were relatives of non-infected patients. Relatives expressed their perceptions of ethical and clinical issues in their responses to the open-ended question. This was not surprising and was in line with Jaswaney et al . ’s findings that visiting restrictions can be problematic, creating many ethical issues related to who can and cannot visit. The impossibility of being physically present for their hospitalized loved ones created worry, anxiety, sadness, and a perceived greater need for more information and updates on the relative’s condition, as expressed in relatives’ comments and in line with the findings of Rottenburg et al. and Sahoo et al. . Many relatives reported stress due to uncertainty, and not being allowed into the hospital created emotional worries and feelings of failing to support and protect their kin. Being present at the patient’s bedside, on the other hand, helped relatives to understand and cope with situations, as reported in the recent study by Hochendoner et al. of the relatives of ICU patients . Our findings revealed significant differences between the high impact of visiting restrictions perceived by the relatives of SARS-CoV-2-infected patients and the lower impact perceived by relatives of non-infected patients, and this effect was similar across ward types. As one might imagine, visiting restrictions strongly affected the relatives of patients in the gynecology/obstetrics, maternity, geriatrics, and general medicine wards, more so than in other hospitalization wards and in line with Hochendoner et al.’s study . This was independent of patient age group or LOS and of relatives’ perceptions of the severity of a SARS-CoV-2 infection. Hoffman et al. used the example of oxygen supplementation to express how crucial contact is with healthcare staff who can explain the patient’s situation. The COVID-19 pandemic and the stresses involved were highly disturbing for relatives waiting for news on their loved ones. Although the majority of our participating relatives did feel considered by healthcare staff, not all of them did; some expressed concerns about visiting restrictions and felt less considered or not at all considered regarding involvement in the care provided. 
A more detailed analysis of each hospitalization would clarify those concerns, but that was beyond this paper’s scope. In opposition to some free comments criticizing a lack of information, our quantitative results showed that most relatives felt well informed by healthcare staff, with no difference between the relatives subjected to visiting restrictions and those not. However, some hospitalization wards showed significant differences, such as maternity/obstetrics, which was unsurprising and in line with recent publications by Venkatesh et al. and Hugelius et al. . Our linear regression model confirmed this, explaining the mild-to-moderate variance in the affect scores of relatives whose loved ones were hospitalized in general medicine and gynecology/obstetrics wards. The Valais Hospital tried to replace physical visits with various digital and technical means, but these had clear limitations. Relatives subjected to visiting restrictions reported lower scores for the quality of communication than relatives who could visit. Unfortunately, relatives’ video or telephone meetings with patients in acute care settings led to fewer agreed changes to care goals with staff than did in-person meetings, as was confirmed in the recent studies by Reitzle et al., Lin et al., Sken et al., and Rose et al. . Also, despite these substitute visiting methods, in-person visiting restrictions reduced relatives’ comprehension of the patient’s overall condition and their possibilities for maintaining social relations, as confirmed by Mahery et al. . Based on relatives’ free comments, visiting restrictions were also a source of emotional distress and increased workloads for healthcare staff, who may not have agreed with hospital policies resulting in them spending a lot of time informing and communicating with relatives. This may have caused problems in the dialogue between healthcare staff and relatives and thus reduced the possibilities of ensuring consensus-based care . The Valais Hospital regularly updated its visiting restrictions, referring closely to Swiss federal and cantonal public health policies concerning SARS-CoV-2 infection risk–benefit assessment—the cornerstone of medical and pandemic policy decision-making. It nevertheless remains difficult to determine whether those visiting restrictions were effective in limiting the spread of COVID-19. Although it might be reasonable to speculate that these policies slowed its spread, based on a mechanistic understanding of the disease, visiting restrictions should be weighed against the potential harm to patients. Our study highlighted the complexities associated with the numerous factors impacted by hospital visiting restrictions. Our results advocate for a more tailored, adaptable, and patient-centered approach to visiting restrictions depending on the clinical situation. Reasonable exceptions might include allowing fathers to visit labor and delivery rooms, pediatrics wards, and ICU units. The authors endorse a nuanced approach to hospital visiting restrictions, taking into account the patient population, visitors’ use of personal protective equipment, screening measures, community disease prevalence, and other circumstances. Visiting restrictions should be clearly and transparently communicated to relatives. Patient discharge during periods with visiting restrictions is another concern, as healthcare staff are tasked with establishing a critical partnership with relatives to organize discharge planning . 
The Valais Hospital and its staff worked to maintain strong relationships between patients and relatives, convinced that these improve the patient experience, safety, and outcomes. Visiting restrictions aimed to protect patients and staff, but some relatives felt that they were no longer essential partners in care. Most relatives understood the rapid shift to strict visiting restrictions, given the nature of the COVID-19 crisis. Nevertheless, these policies proved very difficult for relatives, causing significant emotional stress, concerns for patient safety, and the inability to support loved ones at the bedside. Relatives and healthcare staff must remain partners in care, even when challenging circumstances put that partnership under stress. The COVID-19 pandemic evolved rapidly and continues to do so. Many directives and shifts in policy were implemented without the opportunity to engage with relatives, including a shift in language that returned relatives to their roles as visitors rather than as partners in care. Effective and appropriate communication about policy changes and how relatives and healthcare staff can continue to work together as partners in care is essential to establishing trust and positive collaboration.
To the best of our knowledge, this was the first PREMs survey carried out in Switzerland to include hospitalized patients’ relatives within the context of the COVID-19 pandemic. The study employed as many psychometrically validated questions as possible to investigate PREMs appropriately among relatives.
The study also had some limitations. A first limitation is the inability to interpret it outside the context of the COVID-19 pandemic’s first wave. Valais Hospital had never conducted a PREMs survey and, to the best of our knowledge, no similar studies of relatives’ experiences were conducted during this period, making comparisons with our results difficult. The survey’s self-reporting questionnaire was designed especially for the present study; however, the internal consistency of the PREMS questions on visiting restrictions was limited, and no comparison with the original calculation was available. Other significant limitations to our survey were the reliability and validity of the PREMs self-reported questionnaire employed. The internal consistency of the five unidimensional questions used in it—(i) Loved ones’ levels of affection due to visiting restrictions? (ii) How serious do you think the COVID-19 pandemic is? (iii) How did you perceive the information you received about the COVID-19 pandemic during your loved one’s hospital stay? (iv) How would you rate communication with the staff? (v) As a close relative, how did the hospital staff treat you?—was not tested. At that time, a trade-off between urgency and the scientific accuracy of using a self-reporting questionnaire did not allow us the time to test the PREMs questionnaire’s reliability, especially these unidimensional questions. Moreover, the questionnaire’s limited validity could not be assessed or attenuated by correlating its scores and results with a similar instrument as this did not exist when the survey was launched during the COVID-19 pandemic’s first wave. Another limitation was the delay of 4 to 6 months between patients’ hospitalization and their self-reported survey responses. Furthermore, since well before the COVID-19 pandemic, the Valais Hospital had systematically invited patients to share their opinions and rate their satisfaction with the hospital’s organization and performance; the present survey did not investigate relatives’ satisfaction so as to avoid redundancies, and this could be considered a limitation. Studies based on PREMs are usually regarded as a low level of evidence, as survey completion may lack rigor and the accuracy of the information provided cannot be verified. In addition, the content of the concepts explored has still not been standardized, and we could have missed some relevant experiences among relatives. To respect healthcare’s Quadruple Aim, further research among healthcare professionals should complement this study. Based on our results and in line with the existing international literature published after the COVID-19 pandemic’s first-wave visiting restrictions, restricting visits by all the relatives of hospitalized patients is not recommendable . Future policies must clearly incorporate patients’ and relatives’ insights on this topic. Detailed evaluations of restrictions based on hospital settings (e.g., emergency departments, maternity, psychiatry, and surgery wards) are needed to quantify the relevant risks of visitor absence.
The present study described relatives’ experiences of visiting restrictions, how they were affected by these, and their perceptions of the severity of a SARS-CoV-2 infection and of information flow and communication during the COVID-19 pandemic’s first wave. About two-thirds of responding relatives were moderately emotionally affected by the visiting restrictions, and most felt well-considered by the healthcare staff. Responses to our survey’s open question showed the unique aspects of each relative’s experiences of their loved one’s hospitalization. Our patient-reported experience measures survey (PREMs) data revealed COVID-19’s impact on the social determinants of health among patients’ relatives, thus helping to identify opportunities for improving patient-centered care throughout the following waves of this ongoing crisis and perhaps after it. Although the PREMs questionnaire collected interesting data on relatives’ experiences of visiting restrictions during the COVID-19 pandemic’s first wave, our results should be interpreted with caution considering the regional nature of the health conditions examined and the limitations in the consistency of our ad hoc questionnaire. Future research will need to focus on embedding the collection of PREMs more broadly throughout healthcare institutions, increasing the use of their findings by patients, relatives, clinicians, and policymakers, and facilitating comparisons of patient-reported experiences internationally.
Additional file 1.
Additional file 2. Relatives’ perceptions about the information received on their hospitalized loved one.
Additional file 3. Relatives’ evaluation of the quality of communication.
Additional file 4. Content analysis of relatives’ comments about visiting restrictions as applied across different hospitalization units and departments ( n = 71).
A framework for medical physics compensation in an academic department

INTRODUCTION Compensation is an important aspect of career planning and professional growth for medical physicists. A robust compensation plan will attract and retain talented people and incentivize activities consistent with the goals of the institution. Within radiation oncology, physician compensation is often tied to relative value units (RVUs) but current billing models do not adequately capture clinical physicists' effort. The annual AAPM Professional Survey provides useful aggregate salary data and salary data are available for select public institutions, but to our knowledge, there is no existing literature describing the structure of a medical physics compensation model. We therefore present a model developed for clinical medical physicists in an academic radiation oncology department which we believe will be useful for medical physicists and radiation oncology leaders. Prior to implementation of this new model in our department, physicist compensation roughly followed the AAPM salary scale based on years of experience, with additional stipends for American Board of Radiology (ABR) certification, departmental leadership responsibilities, and medical school academic promotions (Assistant Professor, Associate Professor, and Professor, for PhD-level physicists only). There typically exists a range of interest and opportunity for clinical medical physicists to pursue academic promotion, so while there was a mechanism to reward more academically productive physicists, no specific pathway existed to reward those who were more clinically focused. The model described in this paper is intended to provide a framework for career growth and a compensation ladder for medical physicists with clinical responsibilities in an academic department. METHODS The goals for the new model were: create a market competitive plan to support recruitment and retention of top physics talent, incentivize clinical effort, innovation, citizenship and professional service (e.g., internal and external committee service, teaching, and mentorship) and academic achievement, provide compensation growth opportunities separate from medical school promotions, and create consistent, transparent, and fair metrics applicable to all clinical physicists (either MS or PhD) at both main campus and network locations. In constructing the model, we consulted with medical physics groups from peer institutions as well as publicly available salary data. Taken together, these goals were meant to create a platform for salary growth for both clinically focused and academically active physicists, and reward productive performance above and beyond basic clinical service based on more than simply years of experience. The model was created for a group of medical physicists at a large academic hospital that values clinical care, research, and teaching. The model parameters and structure were developed jointly by physics and departmental administrative leadership, with input and support from physician and institutional leadership, and Human Resources. Model values were iterated to achieve a consensus agreement between improved compensation and feasible budget values. There were 34 physicists included in the model, including three network locations, with years of experience ranging from 1 to 36 years. Physics faculty with research responsibilities but no clinical effort were included in the model.
RESULTS The compensation plan is based on a salary increment unit Δ$. The components are:
1) Base salary: 10Δ$. The base salary is independent of the number of years of experience, plus credits for:
2) ABR certification: 0 or 2Δ$.
3) Clinical tier: 0, 1, 2, or 3Δ$. The clinical tier credit is based on the components shown in Table , with salary increases associated with Tiers II–IV. Consideration for movement to the higher tier is dependent on sustained, proven performance at the higher tier's expectations, but not every item needs to be achieved to advance to the next level. Tiers are assigned by the Physics Division Chief and Director of Clinical Physics with the expectation that progress through the tiers would be discussed with each physicist at least annually.
4) Leadership: 0, 1, 2, or 3Δ$. Leadership credits are awarded for each of three levels: (1) service lead (e.g., brachytherapy or treatment planning lead), basic staff supervision (e.g., medical physics assistants), or educational leadership (e.g., residency program director), (2) assistant director or director, or (3) managing director.
5) Academic level: 0, 1, 2, or 3Δ$. Academic credits are based on promotion to Assistant, Associate, or Full Professor, with the promotion process governed by the guidelines of the Medical School.
The various components of the compensation model are shown in Figure . The salary increment unit Δ$ increases annually by the cost-of-living increase (y%) set by the hospital for all employees. Absent any other changes due to tier, leadership, or academic rank, an employee's salary would increase annually by y%. The compensation model was implemented at the beginning of a fiscal year. Each physicist was assigned a tier rating and credits for the various model components. The modeled salary was compared to the existing salary (plus the annual cost of living increase) and the higher of the two was selected. No physicist salaries were lowered as a result of the new compensation model. DISCUSSION A robust compensation plan has been designed that rewards and incentivizes the diversity of effort in an academic medical physics group and aligns with the clinical, research, and teaching goals of the department. To our knowledge, this is the first publication of a medical physics compensation plan in the literature. The model provides discrete objectives for both clinical and research-based advancement and rewards citizenship/professional service activities both within the department and professional societies. Aside from academic promotion, the compensation structure for PhD- and MS-level physicists is the same, which does result in a lower maximum compensation for MS-level physicists. Note that physicists holding a Doctor in Medical Physics (DMP) degree can be employed by the hospital but would not be eligible for academic appointment per medical school guidelines. Compensation growth (through credits) is available for both clinical and academic achievements, which is notable as integrating a high clinical load with academic activities (e.g., publishing) can be challenging, just as clinical contributions may be less for the more research-focused physicist. Designing and implementing a new compensation plan can be challenging and the model described here includes several limitations. First, it should be recognized that although salary is certainly a major component, it is only one part of overall compensation.
Institutional policies, including benefits and retirement plan options, were not addressed in our model and are not within the department's control. Another limitation is that although metrics for advancing through the tiers are provided, a quantitative scoring system is not used, and the overall tier assessment is still somewhat qualitative. We do recommend that departments consider as many objective measures as possible when implementing a tier system. The tiers, furthermore, are broad, so progressing through the tiers is expected to take several years. Two fictional examples are provided to better illustrate how the compensation model would be applied. First, consider Physicist-1, who is an ABR-certified PhD-level physicist with academic rank of Assistant Professor based primarily on a publishing record consisting of mostly first-author scholarship and a strong local reputation as a clinical physics expert. This physicist independently handles clinical tasks with little guidance from other physicists, has a sustained record of independent and productive contributions to recent clinical projects, actively mentors physics and dosimetry students and peer physicists, and participates in several departmental and AAPM committees. Physicist-1 is assigned Tier II, with a salary of 14Δ (10Δ base, 2Δ for ABR, 1Δ for Tier II, plus 1Δ for academic rank). Second, consider Physicist-2, an ABR-certified MS-level physicist who is a "go-to" person for peer physicists, physicians, dosimetrists, and therapists, provides supervisory guidance for clinical procedures, is the point person for various software tools, proactively and constructively identifies clinical gaps and has led successful high-impact clinical projects, serves as a committee leader within the department and within AAPM, is a sought-after mentor and teaches students/residents, and has formal leadership of a clinical physics service. Physicist-2 is assigned Tier IV, with a salary of 16Δ (10Δ base, 2Δ for ABR, 3Δ for Tier IV, plus 1Δ for leadership). The tier system provides opportunities and challenges. A major challenge is that physicist and leadership expectations of clinical tier level may not match, and the introduction of a "rating" system may require a culture adjustment within the group. First, physicists may receive a tier rating that indicates that their modeled salary may be lower than their actual salary. While their salary would not decrease, this feedback may be challenging. Second, a physicist's modeled salary may be above their current salary, but the physicist may expect a higher tier. The tier system, therefore, creates an opportunity for discussion between each physicist and leadership, including a review of specific milestones that could lead to progression to the next tier and therefore a salary increase. These discussions will be incorporated into the standard "Annual Career Conferences" required for all hospital professional staff. It is critical that the tier metrics be as clear as possible to ensure fairness and that all understand what is expected to reach the next level. Although years of experience certainly inform each physicist's knowledge and clinical skill, the compensation plan does not explicitly recognize years of experience, but rather specific and sustained contributions to the department's goals. It is not expected that physicists would regress to a lower tier, but it is possible.
Clinical tiers, however, are evaluated over several years and short‐term fluctuations in effort or accomplishments should not affect the overall tier assignment. The model is based on a discrete salary unit Δ $. Salaries are therefore within discrete bins, and it is expected that changes to the next bin will occur every few years due to tier changes, more (or less) leadership responsibilities, or academic promotions. This choice was made because salary changes (other than cost of living increases) must be budgeted, and it would be a challenge to re‐budget (and re‐justify) the salaries of all employees every year. The downside is that employees on either side of a bin may be slightly over‐ or underpaid. The value of Δ $ had to reflect this trade‐off between well‐defined and manageable salary metrics on the one hand, and a fine‐grained individual salary structure on the other. The compensation model was set such that the department's salary scale compared favorably with the AAPM salary survey, with a trade‐off between increased compensation and internal budget constraints. The AAPM should be commended for publishing these helpful data, however, they are self‐reported and therefore, in our experience, weighted less by hospital administrators. Further, the data are retrospective, so simply prospectively matching the salary survey results in lagging behind the market. Publicly available salary data are also helpful , in benchmarking salaries. It should be noted that the newly released individualized AAPM salary calculator was not available when the model was developed but should be a useful tool in the future. As noted previously, the total salary will increase annually by the published hospital annual cost of living increase. However, this may not be sufficient to keep pace with changes in the marketplace. We, therefore, plan to review and adjust the base and credit values after several years in collaboration with administration and budget permitting. We further recognize that the model is most relevant to an academic setting, where physicists have varying clinical responsibilities, are encouraged to be active academically, and a promotion path is available through an affiliated medical school. The model was also developed in a therapy medical physics environment but could also apply to diagnostic or nuclear medicine groups. One additional challenge could arise when onboarding new physicists to the group. When converting the group to a new compensation plan, leadership should be sufficiently informed to assign tier ratings appropriately, based on past performance, but this is not possible for experienced physicists joining the group. Since the compensation model has no explicit component for years of experience, the initial tier rating would be an estimate, based on an assessment of the incoming candidate's track record, with specific feedback provided on expectations and pathways for achieving tier progression. Academic level, where applicable, would need to be separately considered by the medical school's promotion committee. We believe, however, that the right approach is to hire for a specific role and set the clinical tier based on, as best as can be determined, the career achievements of the physicist, and not to have the existing budget influence the initial tier level. Lastly, although anecdotal feedback to the compensation changes was positive, physicist satisfaction was not explicitly measured. 
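To make the credit arithmetic of the model concrete, the following minimal Python sketch encodes the base-plus-credits structure described in the Results and reproduces the two fictional examples above. The credit values and tier-to-credit mapping follow the paper; the function names, the example value of the increment unit, and the code itself are our own illustrative assumptions rather than part of the published plan.

```python
DELTA = 10_000  # hypothetical value of the salary increment unit (Delta$), for illustration only

def modeled_salary(abr_certified, clinical_tier, leadership_credit, academic_credit, delta=DELTA):
    """Modeled salary = delta x (10 base credits + ABR + clinical tier + leadership + academic).

    clinical_tier: 1-4; Tier I earns no credit, Tiers II-IV earn 1-3 credits.
    leadership_credit: 0-3 (none, service lead, assistant director/director, managing director).
    academic_credit: 0-3 (none, Assistant, Associate, Full Professor).
    """
    credits = 10                              # base salary
    credits += 2 if abr_certified else 0      # ABR certification credit
    credits += max(clinical_tier - 1, 0)      # clinical tier credit
    credits += leadership_credit              # leadership credit
    credits += academic_credit                # academic credit
    return credits * delta

def annual_update(delta, y_percent):
    """Cost-of-living growth of the increment unit: Delta$ -> Delta$ x (1 + y%)."""
    return delta * (1 + y_percent / 100)

# Fictional Physicist-1: ABR-certified, Tier II, no leadership role, Assistant Professor -> 14 units
assert modeled_salary(True, 2, 0, 1) == 14 * DELTA

# Fictional Physicist-2: ABR-certified, Tier IV, service-lead leadership, no academic rank -> 16 units
assert modeled_salary(True, 4, 1, 0) == 16 * DELTA
```

Under this reading, modeled salaries range from 10Δ$ (base only) to 21Δ$ (ABR certification, Tier IV, managing director, Full Professor); an MS-level physicist without an academic appointment would top out at 18Δ$, consistent with the lower maximum compensation for MS-level physicists noted above.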
CONCLUSION A compensation model has been designed and implemented for a large academic medical physics group, providing a framework for salary growth associated with clinical and academic achievements, leadership, and citizenship. All authors contributed equally to this article. The authors declare no conflicts of interest. |
Platelet-rich fibrin as a hemostatic agent in dental extractions in patients taking anticoagulants or antiplatelet medication: a systematic review

Platelet-rich fibrin (PRF) has become a versatile and widely used agent in dentistry and medicine . Its properties for enhancing and supporting wound healing are suitable for and commonly used in socket and ridge preservation and periodontal treatments, and to minimize pain and postoperative discomfort in oral surgery . Autologous platelet concentrates (APC) have a long history and have undergone an evolution from platelet-rich plasma (PRP) and plasma rich in growth factors (PRGF) to PRF . Compared to other platelet products, Choukroun's PRF, invented in 2001, is the only fully autologous fibrin without the addition of any anticoagulants, and it can be applied in several forms, such as liquid, gel, plugs, or membranes . Its ability to coagulate is accompanied by a higher share and prolonged release of growth factors, such as TGF-β1, VEGF, IL-1β, IGF-1, and PDGF-AB . It therefore combines all the advantages of a stable blood clot, just without the red blood cells attached. The increasing share of patients taking anticoagulants or antiplatelet medication (AP) presents a challenge in oral surgery to provide treatment with a low risk of thromboembolic incidents, and on the other hand, ensure a low incidence of postoperative bleeding episodes . There is a strong trend to continue blood thinning medications during minor oral surgeries and to perform these procedures in outpatient settings . Various agents can be used as hemostatic plugs or dressings to induce hemostasis and prevent postoperative bleeding episodes: tranexamic acid, xenogeneic gelatin sponges or collagen, or chitosan dressings manufactured from freeze-dried shrimp shells . Most of these are well tolerated, but since they are not fully autologous, foreign body reactions and allergies have been reported . In the search for a fully autologous hemostatic material, there have been attempts to use PRF as a concentrate rich in thrombocytes . Based on which antithrombotic medication the patient takes, coagulation is altered, so preparation protocols must be adapted as described by Marinho et al. . While clotting of the PRF plug still works in patients on antithrombotic medication, Ockerman et al. found that the PRF membranes of patients under oral anticoagulation seemed to be weaker and contain fewer leukocytes; however, patients on AP medication showed no difference from the control group not on any medication . Concerning the macroscopic and microscopic fibrin architecture of PRF, Bootkrajang et al. found no difference between patients on warfarin and healthy controls . The aim of this systematic review was to evaluate whether PRF is an effective hemostatic agent to prevent postoperative bleeding after dental extractions in patients under anticoagulation or AP therapy.
Protocol development and eligibility criteria This review was conducted and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) . The protocol was registered at the International Prospective Register of Systematic Reviews (PROSPERO) with registration number CRD42024562289 . A protocol including all aspects of a systematic review methodology was developed prior to the initiation of this review. This included the definition of a focused question, a PICOS (patient, intervention, comparison, outcome, and study design) question, a defined search strategy, study inclusion criteria, determination of outcome measures, screening methods, data extraction, and analysis, and data synthesis. Defining the focused question The following focused question was defined: “Is PRF effective as a hemostatic agent in dental extractions in patients under antiplatelet or anticoagulation therapy?” PICOS question P Among patients taking anticoagulation or antiplatelet therapy undergoing dental extractions. I Does the use of PRF as a hemostatic agent. C When compared to other hemostatic agents or control sites. O Result in changes in hemostasis, bleeding, and postoperative pain. S Clinical studies in humans. Search strategy Two authors (MSK and AM) independently performed an electronic search in several databases, including PubMed, EMBASE, Cochrane Library, and SCOPUS. Articles published up to June 1st, 2024, were considered. No language or time restrictions were applied in the search. Search terms The electronic search strategy used the following combination of key words: (“hemostasis” OR “haemostasis” OR “hemostatic” OR “haemostatic” OR “Dental Extraction” OR “Extraction” OR “Tooth removal” OR “Teeth removal” OR “postoperative bleeding”) AND (“Leukocyte platelet-rich-fibrin” OR “platelet-rich-fibrin” OR “LPRF” OR “L-PRF” OR “Advanced platelet-rich-fibrin” OR “APRF” OR “A-PRF” OR “A-PRF+”). Additionally, the reference lists of review articles and the articles included in the present review were screened. Study selection and inclusion criteria The study selection criteria were studies in German or English. Only clinical studies in humans using autologous PRF were included. Studies using other platelet concentrates, such as PRP or PRGF, were excluded, as were studies evaluating the hemostatic effect of PRF in patients not taking anticoagulants or antiplatelet medication. Screening and selection of studies The titles and abstracts of the selected studies were independently screened by two reviewers (MSK and AM) based on the question, “Is PRF effective as a hemostatic agent in dental extractions in patients under antiplatelet or anticoagulation therapy?” Discrepancies were solved by discussion between two authors (MSK and AM) and a judge (MO). Cohen’s Kappa coefficient was calculated as a measure of agreement between the two reviewers. Subsequently, full-text articles were obtained if the answer to the screening was “yes” or “uncertain.” Data extraction and analysis The following data were extracted: author(s), year of publication, type of study, number of patients, treatment and control groups, type of medication, primary outcome measurement, and significance value. All studies were classified according to the study design to provide an overview of all studies matching the search criteria. Afterwards, the outcomes were compared in separate tables and discussed. 
Due to the heterogeneity of the study protocols, a comprehensive statistical analysis of their outcomes was not possible.
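As a brief illustration of the agreement statistic used at the screening stage (the κ value itself is reported in the Results below), the following Python sketch shows how Cohen's kappa is computed from two reviewers' include/exclude decisions. The decision vectors are invented for illustration and do not reproduce the actual screening data of this review.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters on the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Toy example: 20 abstracts screened by two reviewers who disagree on a single record
reviewer_1 = ["include"] * 6 + ["exclude"] * 14
reviewer_2 = ["include"] * 5 + ["exclude"] * 15
print(round(cohens_kappa(reviewer_1, reviewer_2), 3))  # 0.875 for this toy data
```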
Selection of studies
The database searches identified 1782 articles, and after removing duplicates, 789 remained (Fig.). After the two independent researchers screened the titles and abstracts, 13 articles were found to meet the inclusion criteria and were selected for full-text analysis (inter-reviewer agreement κ = 0.822). After detailed evaluation, two articles were dismissed based on the exclusion criteria, as they used PRF as a hemostatic agent but not in patients taking anticoagulants or AP medication. Finally, 11 studies were considered relevant and were included in this systematic review.

Study characteristics
Classifying the types of studies included, there were three clinical studies without a control group compared to PRF: Sammartino et al., de Almeida Barros Mourão et al., and Berton et al. One study by Harfoush et al. was a controlled clinical study without randomization, and seven studies were randomized clinical trials (RCTs): Eldibany et al., Sarkar et al., Giudice et al., Munawar et al., Brancaccio et al., Rajendra et al., and Kyyak et al. Of these, two were split-mouth studies, and only one was blinded. The patient cohorts ranged from 20 to 300 patients, with a total number of 864 patients and at least 1148 teeth extracted. PRF was compared to dry gauze, chitosan, hemostatic plugs, gelatin sponges, tranexamic acid, and control sites with only stitches (Table). Four studies included patients under AP medication, four studies involved patients taking vitamin K antagonists (VKA), and three studies included patients under direct oral anticoagulants (DOAC). One study did not include any information about the anticoagulant (Table). The PRF protocols varied from leukocyte platelet-rich fibrin (L-PRF) protocols with normal or prolonged centrifugation time to advanced platelet-rich fibrin (A-PRF/A-PRF+), and in six studies, the preparation protocol or type of PRF was not further classified. All studies reported only mild to moderate bleeding events that could mostly be solved by local compression. PRF showed superior results for hemostasis compared to dry gauze, cellulose, and stitches only. In other studies, there were no significant differences between PRF and a chitosan dressing, tranexamic acid, or gelatin concerning postoperative bleeding episodes. Sarkar et al. and Rajendra et al. found that hemostasis was faster in the chitosan group than in the PRF group.

Risk of bias (RoB) in the individual studies
The selected studies were individually screened using version 2 of the Cochrane tool for risk of bias in randomized trials (RoB 2). Five showed a low RoB, and six presented a moderate RoB (Table). The overall quality was good, and most concerns were due to a lack of randomization and the extent of the outcome data (Fig.).
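For readers unfamiliar with how domain-level RoB 2 judgements are combined, the following simplified sketch shows one common way the overall rating can be derived. It is our illustration only and deliberately omits the judgement call that several “some concerns” domains may together justify an overall “high” rating; it is not the full official algorithm.

```python
# Simplified illustration of rolling up domain-level RoB 2 judgements.
# Omits the reviewer judgement that multiple "some concerns" domains may
# together warrant an overall "high" rating.

ROB2_DOMAINS = [
    "randomization process",
    "deviations from intended interventions",
    "missing outcome data",
    "measurement of the outcome",
    "selection of the reported result",
]

def overall_rob(domain_judgements):
    """domain_judgements: dict mapping domain -> 'low' | 'some concerns' | 'high'."""
    levels = set(domain_judgements.values())
    if "high" in levels:
        return "high"
    if "some concerns" in levels:
        return "some concerns"
    return "low"

# Hypothetical study with one problematic domain.
example = dict.fromkeys(ROB2_DOMAINS, "low")
example["deviations from intended interventions"] = "some concerns"
print(overall_rob(example))  # -> "some concerns"
```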
There is a growing range of uses for PRF in all fields of dentistry, and keeping reasonable and evidence-based indications in perspective can be challenging. The aim of this review was to evaluate whether PRF is an effective hemostatic agent in dental extractions in patients under antiplatelet or anticoagulation therapy. To our knowledge, there have been two former reviews on this topic, one by Filho et al. that included three studies in 2021, and one by Campana et al. that summarized six studies but included different forms of APCs (PRF and PRP). Since the number of studies using PRF as a hemostatic agent has increased, a contemporary look at the results of further investigations is needed for a knowledge update.

Different forms of medication carry different risks of postoperative bleeding episodes but can also alter the fibrin clotting process. Some studies adapted their PRF protocols to a longer centrifugation time to adjust for patients taking VKA or AP medication. Overall, PRF derived from any protocol seemed to be feasible as a hemostatic agent, since the bleeding incidents described were all moderate and could be handled by compression.

Although the studies by Sarkar et al. and Rajendra et al. that investigated patients taking AP therapy found faster hemostasis in wounds with chitosan compared to PRF, the bleeding was stabilized within 2–3 min in both studies, which is still a favorable result. In the study by Eldibany et al., none of the groups (chitosan vs. PRF) showed delayed bleeding, but the patients treated with chitosan showed more alveolitis, delayed healing, and greater pain. Likewise, Sarkar et al. found better wound healing and less pain in the PRF group compared to the chitosan group, which presents PRF as slightly inferior in time to bleeding control but superior in patient comfort and cost. Giudice et al., Munawar et al., Harfoush et al., and Brancaccio et al., who also evaluated the use of PRF in patients on antiplatelet therapy, found significantly less bleeding in sockets with A-PRF+ than in control sockets with only stitches, dry gauze, or a control group with tranexamic acid. Nevertheless, it must be stated that the risk of postoperative bleeding is low overall in patients under AP monotherapy.

Patients taking VKA or DOAC have a higher incidence of postoperative bleeding after dental extractions. Sammartino et al., Eldibany et al., and Harfoush et al. evaluated the use of PRF in patients taking warfarin. In Sammartino et al.'s study, two postoperative bleeding episodes occurred in patients with an INR of 3.7 (mean INR 3.16), which could be resolved with local compression of the wound. Eldibany et al. reported no delayed bleeding at all, but the mean INR was lower in this study cohort (mean INR 2.28), which is a critical factor according to Febbo et al., who found a significantly higher risk of postoperative hemorrhage in patients with an INR ≥ 3. The study by Harfoush et al. found a higher incidence of bleeding lasting > 20 min in patients without PRF compared to the PRF group in a cohort with a mean INR of 2.4. Although the INR value seems to be the most important parameter in bleeding control after dental extractions, all three studies showed that PRF might offer an additional benefit in stabilizing the coagulum through its hemostatic effect.

Looking at studies using PRF in patients under DOAC, de Almeida et al. found no bleeding incidents overall, while Kyyak et al. and Berton et al. (2022) described mild oozing, which could be managed by compression. Of these three studies, only Kyyak et al. compared the PRF group to a control group (gelatin sponge), finding no significant difference between the groups. As only one randomized clinical trial has suggested that PRF is not inferior to the use of gelatin as a hemostatic agent, the effect of PRF in DOAC patients is difficult to anticipate, and more studies with a control group are needed.

Additionally, the number and type of extractions, as well as mucosal incisions, also influence the postoperative bleeding risk. None of the studies included in this review analyzed correlations of extractions, osteotomies, or anterior versus posterior extractions with bleeding outcomes, which is an important limitation concerning comparability. Overall, it is not easy to retrace the isolated effect of PRF on postoperative hemorrhage. Only three studies compared the use of PRF with a control group treated with stitches or compression; three compared it to chitosan, one to tranexamic acid, and one to a gelatin sponge. Three studies did not include a control group. Moreover, the studies used different protocols for the preparation of PRF. This effect might be marginal concerning postoperative bleeding episodes, since all PRF protocols lead to coagulation and hence resemble stable blood clots. However, the amount of growth factors and their effect on wound healing may have differed. At least two studies, by Giudice et al. and Brancaccio et al., compared A-PRF to L-PRF. Giudice et al. found that A-PRF was superior to stitches alone concerning postoperative bleeding and wound healing after one and two weeks. In contrast, Brancaccio et al. found similar bleeding rates in the L-PRF and A-PRF groups but better wound healing in the L-PRF group.

Due to the heterogeneity of the studies, a comprehensive statistical analysis of their outcomes was not possible, which must be acknowledged as a limitation of this review. Future randomized studies with a standardized PRF protocol comparing PRF to control sites and to different hemostatic agents should be performed. It is important to address the kind of medication and the extent of the operation (e.g., anterior or posterior teeth, extractions or osteotomies) to make studies comparable and to establish the hemostatic potential of PRF in relation to other agents.
PRF is known to enhance soft tissue healing and reduce postoperative pain. As a fully autologous platelet concentrate, a PRF clot can also support hemostasis after dental extractions in patients taking antiplatelet or anticoagulation therapy. Despite the use of different protocols and control groups, PRF treatment appears to be superior to stitches alone but inferior to chitosan dressings with respect to time to hemostasis. Still, randomized clinical studies comparing PRF as a hemostatic agent to a control group are lacking, and further research evaluating the use of PRF in the context of the extent of the extraction is needed.
An update on the novel approaches towards skills assessment of ophthalmology residents in the Indian scenario | 025a71fe-aa13-45b6-a2b6-fa290f4954e1 | 9240543 | Ophthalmology[mh] | Under the National Medical Council, universities in India have suggested regular assessments in the form of annual examinations at the end of each year, including theory, practical, and viva voce. Nevertheless, there is no uniformity in the practice of the same and no emphasis on documentation of having conducted these examinations. Moreover, the thrust is on the final university examination, including theoretical assessments, practical assessments, and viva-voce. This is true of the National Board of Examinations as well. In India, residency training is usually based on Halsted’s apprentice model. This model involves a discovery-based learning mode, where a resident attempts a procedure and “discovers” how the procedure is done. However, with the present knowledge of the teaching-learning process and the fact that tolerance toward medical error has decreased, this mode has become less acceptable. Thus, a formal method of formative assessment is necessary. Formative assessments, at present, are done in the form of journal presentations and clinical case presentations, which are graded. However, it is neither universally done nor does the university mandate it. Maintenance of logbooks to document the various clinical and laboratory procedures done during residency is encouraged. It also includes the case presentations, journal presentations, and seminars that the resident has presented. A logbook only gives information about the resident’s experience and not of his/her expertise. In addition, when practiced, they do not carry any grades or any marks, and the onus finally lies in the final university exams conducted at the end of the residency. There is no standardized model of assessment across colleges/institutes and universities/boards. Therefore, the level of competency of students cannot be assessed and compared objectively. This assessment strategy has been followed for many years now. However, the COVID-19 pandemic heralded a paradigm shift in our teaching and assessment methodologies. For the first time, the clinical assessment was modified to exclude patients out of the assessment area. Further, residents were assessed based on clinical scenarios. Finally, their clinical knowledge and judgment were assessed rather than based on the demonstration of skills. Thus, newer methods of assessments also need to be looked into and assimilated. Competency-based medical education has been applied in undergraduate medical education in India since 2019. This method focuses on the development of competencies required to fulfill patients’ needs in a real-life situation. It emphasizes continued training of the student until the competency is achieved. This method assesses each student in an objective, measurable standard and is independent of the performance of other students. This method can also be incorporated into the postgraduate ophthalmology curriculum. In this method, assessments are done repetitively and in a criterion-referenced manner in the likeness of or actual clinical setting. The guidelines for competency-based postgraduate training in ophthalmology only provide broad activities under which the resident has to be assessed but fails to provide the assessment methodology. 
Various assessment tools need to be incorporated into our rubric to optimize learning for ophthalmology residents in the present scenario. This article discusses the tools available for assessment and elaborates on their caveats and nuances. The assessment tools will be discussed under the following subheadings: assessment of clinical skills, assessment of surgical skills, and composite tools.

Assessment of clinical skills
Clinical skills are the cornerstone of any medical or surgical specialty. They involve a conglomeration of communication and examination skills, applied in history taking and clinical examination, respectively. In addition, organization and collation of information, arriving at a differential diagnosis, and forming a management plan are crucial. Assessment of these skills is essential. In the present scenario, this is done during the university examinations in the form of a summative assessment. The following tools, described for assessing clinical skills, can be employed easily for formative assessments.

Directly Observed Procedural Skills and Video-Observed Procedural Skills
This tool assesses the trainee's ability to apply his or her knowledge and skills in performing a particular procedure and provides an immediate assessment of the skill performed. Sethi et al. conducted a study on the utility of this method in teaching interns. The core areas focused on were visual acuity assessment, torchlight examination of the anterior segment (difficulty level: 1), direct ophthalmoscopy, and ocular movements (difficulty level: 2). Repeated use of the directly observed procedural skills (DOPS) method during the internship program improved the clinical skills of the stakeholders. This method can also be used to assess surgical skills, as adopted and shown to be effective by Hassanpour et al. in their study on the assessment of resident-performed trabeculectomy. The Royal College of Ophthalmologists has standardized DOPS assessment scores for many clinical skills, which can be easily adapted to the residents' program. An example of a clinical rating scale used for IOP evaluation is shown in Supplement 1. There are similar scales for various clinical skills that can be modified to suit Indian clinical scenarios. The templates of these scores can be accessed on the website Resources - The Royal College of Ophthalmologists (rcophth.ac.UK). The DOPS method requires a significant investment of time, and residents' awareness of being observed may affect their performance. A similar tool is video-observed procedural skills (VOPS), wherein, instead of direct observation, the procedure performed by the resident is videotaped and then assessed by the faculty. In the assessment of surgical skills, VOPS has been shown to be a feasible and valid assessment method, with good correlation to DOPS grades.

Ophthalmology Clinical Evaluation Exercise
This tool was designed by the International Council of Ophthalmology and has been implemented in several languages. The resident is assessed on 33 parameters during the process of history taking, examination, and clinical case presentation. The residents are graded as below expectations, meets some expectations, meets all expectations, or exceeds expectations.
The advantage of this tool is that it has been proven reliable and valid and combines the advantages of the clinical evaluation exercise (CEX), in being comprehensive, with those of a mini-CEX, in reviewing real-time situations, being less time-consuming, and providing immediate feedback to the residents. The disadvantage is that it was not developed internationally; therefore, cultural differences have not been factored in. Palis et al. developed a modified version of the ophthalmology CEX (OCEX). A modified 3-point Dreyfus scale was used in this rubric, which included novice, beginner, and competent stages. The aspects assessed were interview skills, examination, interpersonal and communication skills, and case presentation. The parameters that were tested are shown in the accompanying table. An essential aspect of the modified OCEX is the addition of pertinent negative history, as negative history can be a valuable aid in arriving at the diagnosis. This mini-CEX was also found to be valid and reliable.

Pediatric Examination Assessment Rubric toolkit
Pediatric ophthalmologic examination requires proficiency in many skills. To provide a means to assess this complex set of skills, Langue et al. developed a comprehensive rubric called the pediatric examination assessment rubric (PEAR) toolkit. In this rubric, 12 examination skills pertinent to pediatric ophthalmological examination were assessed using videographic recordings. The clinical encounters included visual acuity examination, anterior segment examination, intraocular pressure measurement, retinoscopy, fundus examination, strabismus evaluation, and measurement of stereoacuity. In addition to the aforementioned parameters, the resident was assessed on his/her rapport with the patient and the patient's family. This tool was found to have minimal inter-rater variability and fair reliability. Though it was designed to assess clinical skills in pediatric ophthalmological examination, the rubric can be used to design assessment tools for other subspecialties with some modifications. Clinical assessment scores can be developed for each clinical skill and residents assessed accordingly. In addition, periodic reviews will help residents hone their skills by establishing a feedback system that allows them to correct their mistakes early in residency and learn skills optimally.

Assessment of surgical skills
Surgical skills are not very rigorously or structurally assessed in the existing assessment modules in the Indian arena of postgraduate ophthalmology training. A standardized toolbox needs to be assimilated into our present system of ophthalmology residency for an objective and unbiased assessment. In addition, attention needs to be paid to changing scenarios such as the COVID-19 pandemic, where innovative methods of assessment need to be incorporated. Automated tools have also arrived on the assessment scene in ophthalmology residency, providing the advantage of being repeatable, reliable, and devoid of human bias. Furthermore, with the COVID-19 pandemic making social distancing an imperative, these techniques ensure the safety of the assessors, assessees, and patients. The following are some of the tools available to assess surgical skills objectively.

Objective assessment of skills in intraocular surgery
The objective assessment of skills in intraocular surgery (OASIS) scoring was developed at Harvard Medical School to assess residents' competency in phacoemulsification. It includes three aspects: preoperative, intraoperative, and postoperative.
The intraoperative aspect is further divided into the following thrust areas: phacoemulsification technique used, total phacoemulsification time, amount of irrigation fluid used, the resident's surgical time, total time in the operating room, location of the incision, use of limbal relaxing incisions, type of blade, and instruments used. The OASIS database allows for evaluating postoperative astigmatism, rates of complications in individual residents, and the various cohorts of patients operated upon by the residents, such as those with pseudoexfoliation. This assessment tool is purely objective and hence has no scope for inter-rater variability. It is a one-page standardized form that is less time-consuming and places no financial constraints on the residents or clinicians, making it an effective and affordable tool.

Global Rating Assessment of Skills in Intraocular Surgery
The Surgical Education Research Group, University of Toronto, developed a more comprehensive tool named global rating assessment of skills in intraocular surgery (GRASIS) that includes both objective and subjective aspects of surgical skills training. GRASIS includes the objective parameters of the OASIS tool and, in conjunction with it, a one-page subjective assessment. The assessed parameters are the manner of treatment of intraocular structures; time, motion, and energy applied on the intraocular structures; eye position and microscope use; instrument handling; and use of the non-dominant hand. Further, the resident is also assessed on knowledge of the equipment used for phacoemulsification and vitrectomy, operation flow, and specific procedures. In addition, the resident's interaction with the scrub nurse and handling of unexpected events are assessed. Based on this, an overall score is given. This subjective assessment pays attention to the resident's surgical knowledge, surgical preparedness, and interpersonal skills.

Objective Structured Assessment of Cataract Surgical Skill
Saleh et al. described a tool named objective structured assessment of cataract surgical skill (OSACSS) that focuses on both global and phacoemulsification-specific competencies. Surgical videos taped while the residents performed cataract surgery were assessed based on 14 cataract-specific stems and six global indicators. In the study that led up to the definition of OSACSS, it was found that when residents had performed 250 or more surgeries, the tool was not able to identify differences in competencies. However, in the group of residents who had performed fewer than 250 surgeries, the competencies were much better in those who had performed 50 or more surgeries. It is therefore a useful tool during the early days of residency training. Scoring of the residents' performance could be done by the faculty and/or the trainees. However, Casswell et al. found that the senior trainees' self-assessment correlated better with faculty assessment than the junior trainees' self-assessment.

Imperial College Surgical Assessment Device
The Imperial College surgical assessment device (ICSAD) uses a motion-sensing device to assess a resident's suturing technique on a model eye under an operating microscope with standardized instruments. A single passive receiver is attached to the index finger of the resident, and the parameters, namely total path length, time, and the number of individual hand movements, are analyzed. In addition, a video is captured, and two independent observers assess the parameters. This tool correlates with the objective structured assessment of technical skills (OSATS) tool in assessing suturing competency.

International Council of Ophthalmology - Ophthalmology Surgical Competency Assessment Rubric
The disadvantage of the previously mentioned tools is that they have been developed locally, keeping the relevant culture in mind. An internationally developed tool transcends borders and allows for easy adaptation. In addition, these tools pertain primarily to cataract surgery. Tools specific to other ophthalmic surgeries are essential to assess holistic learning during the ophthalmology residency program. With this in mind, the International Council of Ophthalmology - ophthalmology surgical competency assessment rubric (ICO-OSCAR) tool was developed. OSCAR rubrics have been developed for various surgeries, such as extracapsular cataract extraction, lateral tarsal strip surgery, pediatric cataract surgery, phacoemulsification, ptosis, small-incision cataract surgery (SICS), strabismus, trabeculectomy, and vitrectomy. There are also tools available for procedures such as panretinal photocoagulation. The ICO-OSCAR is a standardized, internationally valid tool for the educator (and the resident) to evaluate competence in performing a specific procedure objectively. In this rubric, the surgical procedure is broken down into its individual steps, and proficiency is graded on the 4-point Dreyfus scale, viz., novice, beginner, advanced beginner, and competent. Each step is described in the tool, and the preceptor has to circle the observed performance description. This has to be done immediately after the learner performs the procedure in order to give timely, structured, and specific feedback, thus enhancing the quality of the learning process. At the end of this assessment, an improvement plan has to be made so that the learner improves upon the deficiencies observed during the surgical procedure. The tool has been translated into various languages, such as Mandarin Chinese, French, Portuguese, Russian, Spanish, Thai, and Vietnamese, for use in the countries where these are spoken. These tools are available online and in the form of an ICO-OSCAR application. The various rubrics available in the ICO-OSCAR tool are listed in the accompanying table.

Self-assessment and peer assessment
Cheon et al. described the use of ICO-OSCAR by residents for peer and self-assessment. In their study, it was found that peer assessment was as efficient as assessment by teachers, while self-assessment was not as consistent. Thus, peer assessment can be an addendum to the armamentarium of assessment tools. This was corroborated by a study done by Srikumaran et al., where it was found that self-assessment was an inaccurate representation of the trainee's proficiency.

Assessment scale of corneal rupture suturing
Zhang et al. described this scale to assess residents' proficiency in suturing eyes with a corneal rupture. Porcine eyes were used, and the residents were required to suture an L-shaped corneal tear under an operating microscope. This process was videotaped and assessed by the faculty. This comprehensive assessment involved the following aspects: preoperative preparation, microscope use, instrument handling, hand-eye coordination, suturing technique, wound closure, and postoperative clean-up. The tool was found to be reliable and repeatable. However, it does not reflect real-world situations and cannot assess the resident's judgment.
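To make the ICSAD motion metrics described above more concrete, the following minimal sketch shows how total path length and a simple count of discrete hand movements could be derived from a stream of position samples. This is a hypothetical illustration under our own assumptions (sample format, speed threshold), not the device's actual software or data from any cited study.

```python
# Hypothetical illustration of ICSAD-style metrics from motion-sensor samples.
# Each sample is (time_in_seconds, x, y, z) in arbitrary units; the data below
# are invented for demonstration and do not come from any cited study.
import math

def path_length(samples):
    """Sum of Euclidean distances between consecutive position samples."""
    total = 0.0
    for (_, x0, y0, z0), (_, x1, y1, z1) in zip(samples, samples[1:]):
        total += math.dist((x0, y0, z0), (x1, y1, z1))
    return total

def count_movements(samples, speed_threshold=5.0):
    """Count discrete movements as excursions where speed rises above a threshold."""
    movements, moving = 0, False
    for (t0, x0, y0, z0), (t1, x1, y1, z1) in zip(samples, samples[1:]):
        speed = math.dist((x0, y0, z0), (x1, y1, z1)) / max(t1 - t0, 1e-9)
        if speed > speed_threshold and not moving:
            movements += 1
        moving = speed > speed_threshold
    return movements

samples = [(0.0, 0, 0, 0), (0.5, 2, 1, 0), (1.0, 2, 1, 0), (1.5, 6, 4, 1), (2.0, 6, 4, 1)]
task_time = samples[-1][0] - samples[0][0]
print(path_length(samples), count_movements(samples), task_time)
```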
Eye surgical skills assessment test
The eye surgical skills assessment test (ESSAT) was developed by Fisher et al. to assess students' proficiency before they enter the operating room. Their skill sets are tested in a wet laboratory setting. There are three stations, which include skin suturing, muscle recession, phacoemulsification/wound construction, and assessment of the suturing technique. The resident's performance is videotaped, and the faculty assesses it based on a station-specific checklist and a global rating scale of performance. Alternatively, an assessment tool involving the Eyesi® simulator may also be considered, as enunciated by Le et al.

Eyesi® simulator as an assessment tool
This simulated assessment model correlates well with real-life metrics and can thus work as an effective tool to assess surgical competencies. Eyesi® assessment scores correlated well with real-life cataract surgery assessment scores. However, as the motion-tracking rubric can show inter-individual variation, it is wise to use this assessment tool along with the other tools to gain a better picture of the acquisition of competencies by the resident.

These tools, which objectively assess surgical skills, can be modified to suit the Indian ophthalmology surgical scenario. Most universities provide guidelines regarding the number and type of surgeries in which each resident has to be proficient by the end of his/her residency. Along with this recommendation, if tools to assess surgical skills are mandated, the quality of residents will become comparable across different universities and colleges. This would help in the standardization of the residency program across India.

Composite tools
These tools fall in line to some extent with the ACGME guidelines and test the resident's interpersonal skills, communication skills, professionalism, and system-based practice.

ICO-360-degree evaluation
This is a comprehensive evaluation of a resident's all-round performance in the ophthalmology setup. The assessment is done by peers, coworkers, patients, and faculty. The parameters tested are professionalism, interpersonal and communication skills, and system-based practice.

National curriculum for ophthalmology residency training
Developed by Grover et al. under the aegis of the All India Ophthalmological Society, the curriculum also gives guidelines for the assessment of ophthalmology residents. Formative and summative assessments form integral parts of the assessment prescript. The proposition is that formative assessments would include assessing personal attributes, clinical skills and performance, academic activities, and practical assessments after each clinical posting, viz., the subspecialties such as orbit and oculoplasty, cornea, retina, pediatric ophthalmology and strabismus, and glaucoma. Summative assessment would comprise theory examinations conducted at the end of 1 year, 2 years, and 2 years and 9 months. In addition, summative assessments would include the following:
- Logbook
- Theory examinations, divided into four papers for ease of assessment
- Practical examinations, comprising clinicals (one long case, two short cases, two fundus cases, one refraction case, and one OCEX case) and viva voce (instruments, pathology, microbiology specimens, drugs, imaging modalities, visual fields, and other ophthalmic diagnostic charts)

On-call assessment tool
This tool was designed by Golnik et al., who performed a retrospective chart audit of the residents' on-call charts. The charts were assessed with a tool that tested timeliness of consultation, history, examination, assessment and plan, and urgency rating. The residents' performance was rated as satisfactory, borderline, or unsatisfactory.

Tool to assess integrated clinical communication skills
As much as assessing clinical and surgical skills is essential, assessing a student's communication skills is equally important. It has been said that, more than a patient needing to know how much a doctor knows, it is vital that the patient knows how much a doctor cares. In this regard, various tools have been tested, but more so in the space of undergraduate medical education. A case in point is the tool devised by Brouwers et al. that was used among undergraduate medical students but can be applied to ophthalmology residents. Students were taught the various aspects of communication based on the biopsychosocial model during their third year of the undergraduate medical course. At the end of the course, an objective structured clinical examination (OSCE) was conducted, including two stations dedicated to communication skills. Various aspects were assessed, including verbal and non-verbal communication. The National Medical Council has developed a module known as the Attitude, Ethics, and Communication (AETCOM) module for undergraduate students. In this module, the student's active participation in planned focused group discussions, small group discussions, and skill lab sessions is assessed by a trained evaluator and forms part of the formative assessment. Summative assessment is conducted in the form of theory questions on attitudes, ethics, and communication in the year-end examinations. Modules that pertain to ophthalmology can be formulated, with formative and summative assessments that test the resident's competency in the above parameters.

A peek into the future
Eye movements and surgical proficiency
Brouwers et al. conducted a study in which residents performed simulated surgical tasks while their eye movements were recorded. It was seen that eye-movement data can be used to ascertain whether a resident has beginner or intermediate proficiency in microsurgical skills. Though this study was not done specifically in the ophthalmology setting, it provides an innovative approach to assessing surgical skills. Further studies would be required to apply this model in the ophthalmic microsurgical setting.

Wireless sensor glove for surgical skills assessment
This is a unique approach to assessing surgical skills. The study was done to assess skills in laparoscopic surgeons and requires modification of the rubric for ophthalmological microsurgeries. A glove was designed that could transfer, via a wireless mode, the data collected from its sensors to a base station fitted on a computer or laptop. Hand gestures used while performing the task were compared between novice and expert surgeons. Exploring this tool for the assessment of ophthalmic surgeons would be very innovative and helpful.

Machine learning and deep learning
With artificial intelligence becoming ubiquitous in its applicability, ophthalmology skill assessment is no stranger to its possibilities. In a study done by Yu et al., ten phases of resident- and faculty-performed cataract surgeries were assessed by videotaping them. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) were used to assess the parameters, including side-port incision, main incision, capsulorhexis, hydrodissection, phacoemulsification, cortical removal, lens insertion, ophthalmic viscosurgical device removal, and wound closure. The steps were noted for the number of attempts made and any failed steps. Various algorithms were tested and compared. It was found that models using instrument labels and video images were the best way to assess the various steps. Nonetheless, further research is required in this direction to find and refine such automated testing tools in the setting of ophthalmology residency.

Caveats
Extensive studies have been done on tools that assess cataract surgery, while tools that assess residents' performance of other ophthalmic surgeries are not well researched. Currently, the focus of research is on surgical skills; clinical skills assessment requires further scrutiny. These scoring tools are considered to be time- and cost-intensive. We need to adopt tools that are effective and easy to implement in the Indian scenario, taking into account both surgical and clinical skill assessment. These assessments should be objective, time-efficient, and reliable.
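Returning to the machine learning approach outlined above, the following minimal sketch shows the general shape of a pipeline that combines a per-frame convolutional feature extractor with a recurrent layer to label surgical phases from video. It is a toy model on random data, written under our own assumptions (frame size, number of phases, availability of PyTorch); it is not the architecture used by Yu et al.

```python
# Toy sketch of a CNN + RNN surgical-phase classifier; not the model from the cited study.
import torch
import torch.nn as nn

class PhaseClassifier(nn.Module):
    def __init__(self, num_phases=10, feat_dim=64):
        super().__init__()
        # Per-frame CNN feature extractor (grayscale 64x64 frames assumed).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        # RNN aggregates the frame features over time.
        self.rnn = nn.GRU(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, num_phases)

    def forward(self, video):              # video: (batch, frames, 1, 64, 64)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out)              # per-frame phase logits: (batch, frames, num_phases)

model = PhaseClassifier()
dummy = torch.randn(2, 8, 1, 64, 64)       # 2 clips of 8 frames each
print(model(dummy).shape)                  # torch.Size([2, 8, 10])
```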
Clinical skills are the cornerstone of any medical or surgical specialty. It involves a conglomeration of communication and examination skills in view of history taking and clinical examination, respectively. In addition, organization and collation of information, arriving at a differential diagnosis, and forming a management plan are crucial. Assessment of these skills is essential. In the present scenario, this is being done during the university examinations in the form of a summative assessment. The following are the various tools described in assessing the clinical skills which can be employed easily for formative assessments. Directly Observed Procedural Skills and Video-Observed Procedural Skills This tool assesses the trainee’s ability to apply his knowledge and skills in performing a particular procedure and provides an immediate assessment of the skill performed. Sethi et al . conducted a study on the utility of this method in teaching interns. The core areas focused on were visual acuity assessment, torchlight examination of the anterior segment (difficulty level: 1), direct ophthalmoscopy, and ocular movements (difficulty level: 2). It was seen that repeated use of the directly observed procedural skills (DOPS) method during the internship program improved the clinical skills of the stakeholders. This method can also be used to assess surgical skills as adopted and proved as an effective method by Hassanpour et al . in their study on assessment of resident performed trabeculectomy. The Royal College of Ophthalmologists has a standardized DOPS assessment score for many clinical skills, which can be easily adapted to the residents’ program. An example of a clinical rating scale used for IOP evaluation is shown in Supplement 1 . There are similar scales for various clinical skills that can be modified to suit Indian clinical scenarios and used. The templates of these scores can be accessed on the website Resources - The Royal College of Ophthalmologists (rcophth.ac.UK). The DOPS method requires a significant amount of time investment, and residents being aware of being observed may affect their performance. A similar tool is video-observed procedural skills (VOPS), wherein instead of direct observation, the procedure done by the resident is videotaped and then assessed by the faculty. In the assessment of surgical skills, it was shown that VOPS is a feasible and valid assessment method and had a good correlation when compared to DOPS grades. Ophthalmology Clinical Evaluation Exercise This tool was designed by the International Council of Ophthalmology and has been implemented in several languages. The resident is assessed on 33 parameters during the process of history taking, examination, and clinical case presentation. The residents are graded as below expectations, meets some expectations, meets all expectations, or exceeds expectations. The advantage of this tool is that it has been proven reliable and valid and has the advantages of both clinical evaluation exercise (CEX) in being comprehensive and of a mini CEX in reviewing real-time situations and less time-consuming and providing immediate feedback to the residents. The disadvantage is that it has not been internationally developed. Therefore, cultural differences have not been factored in. Palis et al . developed a modified version of the ophthalmology CEX (OCEX). A modified 3-point Dreyfus scale was used in this rubric, which included novice, beginner, and competent stages. 
The aspects that were assessed were interview skills, examination, interpersonal and communication skills, and case presentation. The parameters that were tested are shown in . An essential aspect of the modified OCEX is the addition of pertinent negative history as negative history can be a valuable model of arriving at the diagnosis. This mini-CEX was also found to be valid and reliable. Pediatric Examination Assessment Rubric toolkit Pediatric ophthalmologic examination requires proficiency in many skills. To provide a means to assess the complex set of skills, Langue et al . developed a comprehensive rubric called the pediatric examination assessment rubric (PEAR) toolkit. In this rubric, 12 examination skills pertinent to pediatric ophthalmological examination were assessed using videographic recordings. The clinical encounters included visual acuity examination, anterior segment examination, intraocular pressure measurement, retinoscopy, fundus examination, strabismus evaluation, and measurement of stereoacuity. In addition to the aforementioned parameters, the resident was assessed based on their rapport with the patient and his/her family. This tool was found to have minimal inter-rater variability and fair reliability. Though this tool was designed to assess pediatric ophthalmological examination clinical skills, the rubric can be used to design assessment tools for other subspecialties with some modifications. Clinical assessment scores can be developed for each clinical skill and residents assessed accordingly. In addition, periodic reviews will help residents hone their skills by establishing a feedback system that will help residents correct their mistakes early in residency and learn skills optimally.
This tool assesses the trainee’s ability to apply his knowledge and skills in performing a particular procedure and provides an immediate assessment of the skill performed. Sethi et al . conducted a study on the utility of this method in teaching interns. The core areas focused on were visual acuity assessment, torchlight examination of the anterior segment (difficulty level: 1), direct ophthalmoscopy, and ocular movements (difficulty level: 2). It was seen that repeated use of the directly observed procedural skills (DOPS) method during the internship program improved the clinical skills of the stakeholders. This method can also be used to assess surgical skills as adopted and proved as an effective method by Hassanpour et al . in their study on assessment of resident performed trabeculectomy. The Royal College of Ophthalmologists has a standardized DOPS assessment score for many clinical skills, which can be easily adapted to the residents’ program. An example of a clinical rating scale used for IOP evaluation is shown in Supplement 1 . There are similar scales for various clinical skills that can be modified to suit Indian clinical scenarios and used. The templates of these scores can be accessed on the website Resources - The Royal College of Ophthalmologists (rcophth.ac.UK). The DOPS method requires a significant amount of time investment, and residents being aware of being observed may affect their performance. A similar tool is video-observed procedural skills (VOPS), wherein instead of direct observation, the procedure done by the resident is videotaped and then assessed by the faculty. In the assessment of surgical skills, it was shown that VOPS is a feasible and valid assessment method and had a good correlation when compared to DOPS grades.
This tool was designed by the International Council of Ophthalmology and has been implemented in several languages. The resident is assessed on 33 parameters during the process of history taking, examination, and clinical case presentation. The residents are graded as below expectations, meets some expectations, meets all expectations, or exceeds expectations. The advantage of this tool is that it has been proven reliable and valid and has the advantages of both clinical evaluation exercise (CEX) in being comprehensive and of a mini CEX in reviewing real-time situations and less time-consuming and providing immediate feedback to the residents. The disadvantage is that it has not been internationally developed. Therefore, cultural differences have not been factored in. Palis et al . developed a modified version of the ophthalmology CEX (OCEX). A modified 3-point Dreyfus scale was used in this rubric, which included novice, beginner, and competent stages. The aspects that were assessed were interview skills, examination, interpersonal and communication skills, and case presentation. The parameters that were tested are shown in . An essential aspect of the modified OCEX is the addition of pertinent negative history as negative history can be a valuable model of arriving at the diagnosis. This mini-CEX was also found to be valid and reliable.
Pediatric ophthalmologic examination requires proficiency in many skills. To provide a means to assess the complex set of skills, Langue et al . developed a comprehensive rubric called the pediatric examination assessment rubric (PEAR) toolkit. In this rubric, 12 examination skills pertinent to pediatric ophthalmological examination were assessed using videographic recordings. The clinical encounters included visual acuity examination, anterior segment examination, intraocular pressure measurement, retinoscopy, fundus examination, strabismus evaluation, and measurement of stereoacuity. In addition to the aforementioned parameters, the resident was assessed based on their rapport with the patient and his/her family. This tool was found to have minimal inter-rater variability and fair reliability. Though this tool was designed to assess pediatric ophthalmological examination clinical skills, the rubric can be used to design assessment tools for other subspecialties with some modifications. Clinical assessment scores can be developed for each clinical skill and residents assessed accordingly. In addition, periodic reviews will help residents hone their skills by establishing a feedback system that will help residents correct their mistakes early in residency and learn skills optimally.
Surgical skills are not very rigorously or structurally assessed in the existing assessment modules in the Indian arena of postgraduate ophthalmology training. A standardized toolbox needs to be assimilated into our present system of ophthalmology residency for an objective and unbiased assessment. In addition, attention needs to be paid to changing scenarios such as the COVID-19 pandemic, where innovative methods of assessment need incorporation. Automated tools have also arrived at the assessment scene in ophthalmology residency, providing the advantage of being repeatable, reliable, and devoid of human bias. Furthermore, with the COVID-19 pandemic making social distancing an imperative, these techniques ensure the safety of the assessors, assessee, and patients. The following are some of the tools available to assess surgical skills objectively. Objective assessment of skills in intraocular surgery The objective assessment of skills in intraocular surgery (OASIS) scoring was developed at the Harvard Medical School to assess residents’ competency in phacoemulsification. It included three aspects: preoperative, intraoperative, and postoperative. The intraoperative aspect was further divided into the following thrust areas: phacoemulsification technique used, total phacoemulsification time, amount of irrigation fluid used, the resident’s surgical time, total time in the operating room, location of the incision, use of limbal relaxing incisions, type of blade, and instruments used. The OASIS database allows for evaluating postoperative astigmatism, rates of complications in individual residents, and the various cohorts of patients that were operated upon by the residents, such as pseudoexfoliation. This assessment tool is purely objective and hence has no scope for inter-rater variability. It is a one-page standardized form that is less time-consuming and has no financial constraints on the residents or clinicians, thus making it an effective and affordable tool. Global Rating Assessment of Skills in Intraocular Surgery The Surgical Education Research Group, University of Toronto, developed a more comprehensive tool named global rating assessment of skills in intraocular surgery (GRASIS) that included the objective and subjective aspects of surgical skills training. GRASIS includes the objective parameters of the OASIS tool, and in conjunction with it, has a one-page subjective assessment. The assessed parameters are the manner of treatment of intraocular structures, time, motion, and energy applied on the intraocular structures, eye position and microscope use, instrument handling, and use of the non-dominant hand. Further, the resident is also assessed on knowledge of equipment used for phacoemulsification and vitrectomy, operation flow, and specific procedures. In addition to this, the residents’ interaction with the scrub nurse and handling of unexpected events are assessed. Based on this, an overall score is given. This subjective assessment pays attention to the resident’s surgical knowledge, surgical preparedness, and interpersonal skills. Objective Structured Assessment of Cataract Surgical Skill Saleh et al . described a tool named objective structured assessment of cataract surgical skill (OSACSS) that focused on both global and phacoemulsification-specific competencies. Surgical videos that were taped when the residents performed cataract surgery were assessed based on 14 cataract-specific stems and six global indicators. 
In the study that led up to the defining of OSACSS, it was found that when residents performed 250 or more surgeries, the tool was not able to identify differences in competencies. However, in the group of residents that had performed less than 250 surgeries, the competencies were much better in those who had performed 50 or more surgeries. It is a useful tool during the early days of residency training. Scoring of the residents’ performance could be done by the faculty and/or the trainees. However, Casswell et al . found that the senior trainees’ self-assessment correlated better with faculty assessment than the junior trainees’ self-assessment. Imperial College Surgical Assessment Device The tool, Imperial College surgical assessment device (ICSAD), uses a motion-sensing device to assess a resident’s suturing technique on a model eye by using an operating microscope with standardized instruments. A single passive receiver is attached to the index finger of the resident, and the parameters, namely total path length, time, and the number of individual hand movements, are analyzed. In addition, a video is captured, and two independent observers assess the parameters. This tool correlates with the objective structured assessment of technical skills (OSATS) tool in assessing the suturing competency. International Council of Ophthalmology- Ophthalmology Surgical Competency Assessment Rubric The disadvantage of the previously mentioned tools is that they have been developed locally, keeping the relevant culture in mind. An internationally developed tool transcends borders and allows for easy adaptation. In addition, these tools pertain primarily to cataract surgery. Tools specific to other ophthalmic surgeries are essential to assess holistic learning during the ophthalmology resident program. With this in mind, the International Council of Ophthalmology- ophthalmology surgical competency assessment rubric (ICO-OSCAR) tool was developed. The OSCAR rubrics have been developed for various surgeries, such as extracapsular cataract extraction, lateral tarsal strip surgery, pediatric cataract surgery, phacoemulsification, ptosis, small-incision cataract surgery (SICS), strabismus, trabeculectomy, and vitrectomy. There are also tools available for procedures such as panretinal photocoagulation. The ICO-OSCAR is a standardized, internationally valid tool for the educator (and the resident) to evaluate competence in performing a specific procedure objectively. In this rubric, the surgical procedure is broken down into its individual steps, and the proficiency is graded based on the 4-point Dreyfus scale, viz., novice, beginner, advanced beginner, and competent. Each step is described in the tool, and the preceptor has to circle the observed performance description given. This has to be done immediately after the learner performs the procedure in order to be able to give timely, structured, and specific feedback, thus enhancing the quality of the learning process. At the end of this assessment, an improvement plan has to be made so that the learner improves upon the deficiencies that were seen during the surgical procedure. The tool has been translated into various languages, such as Mandarin Chinese, French, Portuguese, Russian, Spanish, Thai, and Vietnamese, for use in the countries where the above are known. These tools are available online and in the form of an ICO-OSCAR application. shows the various rubrics available in the ICO-OSCAR tool. Self-assessment and peer assessment Cheon et al . 
described the use of ICO-OSCAR by residents for peer and self-assessment. In their study, it was found that peer assessment was as efficient as assessment by teachers, while self-assessment was not as consistent. Thus, peer assessment can be an addendum to the armamentarium of assessment tools. This was corroborated by a study done by Srikumaran et al. , where it was found that self-assessment was an inaccurate representation of the trainee’s proficiency. Assessment scale of corneal rupture suturing Zhang et al . described this scale to assess residents’ proficiency in performing suturing in eyes with a corneal rupture. Porcine eyes were used, and the residents were required to suture an L-shaped corneal tear under an operating microscope. This process was videotaped and assessed by the faculty. This comprehensive assessment involved the following aspects: preoperative preparation, microscope use, instrument handling, hand-eye coordination, suturing technique, wound closure, and postoperative clean-up. This tool was found to be reliable and repeatable. However, this tool does not ascribe to real-world situations and cannot describe the resident’s judgment. Eye surgical skills assessment test The eye surgical skills assessment test (ESSAT) was developed by Fisher et al . to assess students’ proficiency before they enter the operating room. Their skillsets are tested in a wet laboratory mode. There are three stations, which include skin suturing, muscle recession, phacoemulsification/wound construction, and assessment of the suturing technique. The resident’s performance is videotaped, and the faculty assesses the residents’ performance based on a station-specific checklist and a global rating scale of performance. Instead of this, an assessment tool involving the Eyesi® simulator may also be considered, as enunciated by Le et al . Eyesi® simulator as an assessment tool This simulated assessment model correlates well with the real-life metrics and can thus work as an effective tool to assess surgical competencies. Eyesi® assessment scores correlated well with real-life cataract surgery assessment scores. However, as the motion tracking rubric can have inter-individual variations, it is wise to use this assessment tool along with the other tools to gain a better picture of the acquisition of competencies by the resident. These tools, which objectively assess the surgical skills, can be modified to suit the Indian ophthalmology surgical scenario. Most universities provide guidelines regarding the number of surgeries and type of surgeries that each resident has to be proficient in by the end of his/her residency. Along with this recommendation, if tools to assess surgical skills are mandated, then the quality of residents will also become comparable across different universities and colleges. This would help in the standardization of the residency program across India.
These tools align, at least in part, with the ACGME guidelines and test the resident's interpersonal skills, communication skills, professionalism, and systems-based practice.
This is a comprehensive evaluation of a resident's all-round performance in the ophthalmology setup. The assessment is done by peers, coworkers, patients, and faculty, and the parameters tested are professionalism, interpersonal and communication skills, and systems-based practice.

National curriculum for ophthalmology residency training

Developed by Grover et al. under the aegis of the All India Ophthalmological Society, the curriculum also gives guidelines for the assessment of ophthalmology residents. Formative and summative assessments form integral parts of the assessment framework. Formative assessments would include personal attributes, clinical skills and performance, academic activities, and practical assessments after each clinical posting, viz., the subspecialties such as orbit and oculoplasty, cornea, retina, pediatric ophthalmology and strabismus, and glaucoma. Summative assessment would comprise theory examinations conducted at the end of 1 year, 2 years, and 2 years and 9 months. In addition, summative assessments would include the following:
- Logbook
- Theory examinations divided into four papers for ease of assessment
- Practical examinations, comprising Clinicals (one long case, two short cases, two fundus cases, one refraction case, and one OCEX case) and Viva Voce (instruments, pathology, microbiology specimens, drugs, imaging modalities, visual fields, and other ophthalmic diagnostic charts)

On-call assessment tool

This tool was designed by Golnik et al., who performed a retrospective audit of residents' on-call charts. Performance was assessed with a tool covering timeliness of consultation, history, examination, assessment and plan, and urgency rating, and was graded as satisfactory, borderline, or unsatisfactory.

Tool to assess integrated clinical communication skills

As much as assessing clinical and surgical skills is essential, assessing a student's communication skills is equally important. It has been said that patients care less about how much a doctor knows than about knowing how much the doctor cares. Various tools have been tested in this regard, although mostly in undergraduate medical education. A case in point is the tool devised by Brouwers et al., which was used among undergraduate medical students but can be applied to ophthalmology residents. Students were taught the various aspects of communication based on the biopsychosocial model during their third year of the undergraduate medical course. At the end of the course, an objective structured clinical examination (OSCE) was conducted, including two stations dedicated to communication skills, assessing both verbal and non-verbal communication. The National Medical Council has developed the Attitude, Ethics, and Communication (AETCOM) module for undergraduate students, in which the student's active participation in planned focused group discussions, small group discussions, and skill lab sessions is assessed by a trained evaluator as part of the formative assessment. Summative assessment is conducted through theory questions on attitudes, ethics, and communication in the year-end examinations. Modules that pertain to ophthalmology can be formulated, with formative and summative assessments that test the resident's competency in these domains.
Eye movements and surgical proficiency

Brouwers et al. conducted a study in which residents performed simulated surgical tasks while their eye movements were recorded. Eye movement data could be used to distinguish beginner from intermediate proficiency in microsurgical skills. Although this study was not done specifically in the ophthalmology setting, it offers an innovative approach to assessing surgical skills, and further studies would be required to apply the model in the ophthalmic microsurgical setting.

Wireless sensor glove for surgical skills assessment

This is a unique approach to assessing surgical skills. The study was done in laparoscopic surgeons, and the rubric would require modification for ophthalmic microsurgery. A glove was designed that wirelessly transfers the data collected from its sensors to a base station on a computer or laptop, and the hand gestures used while performing the task were compared between novice and expert surgeons. Exploring this tool for the assessment of ophthalmic surgeons would be innovative and helpful.

Machine learning and deep learning

With artificial intelligence becoming ubiquitous, ophthalmology skills assessment is no stranger to its possibilities. In a study by Yu et al., videotaped cataract surgeries performed by residents and faculty were assessed across ten phases. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) were used to assess the steps, including side port incision, main incision, capsulorhexis, hydrodissection, phacoemulsification, cortical removal, lens insertion, ophthalmic viscosurgical device removal, and wound closure; each step was noted for the number of attempts made and any failed steps. Various algorithms were tested and compared, and models using instrument labels together with video images assessed the various steps best. Nonetheless, further research is required to develop and refine such automated assessment tools in the setting of ophthalmology residency.
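The networks used in that study are not described here. Purely as an illustration of the general approach, a frame-level convolutional feature extractor feeding a recurrent model that labels each video frame with a surgical phase, the sketch below uses an assumed ResNet-18 backbone, GRU size, and ten-phase output; it is not the published implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class PhaseRecognizer(nn.Module):
    """Toy CNN + RNN labeller: per-frame features pooled over time into phase logits."""

    def __init__(self, num_phases: int = 10, hidden: int = 256):
        super().__init__()
        backbone = resnet18()          # per-frame feature extractor (512-d output)
        backbone.fc = nn.Identity()    # drop the ImageNet classification head
        self.backbone = backbone
        self.rnn = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_phases)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, frames, 3, H, W) -> per-frame phase logits (batch, frames, num_phases)
        b, t, c, h, w = clips.shape
        feats = self.backbone(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out)

if __name__ == "__main__":
    model = PhaseRecognizer()
    dummy = torch.randn(2, 8, 3, 224, 224)   # two 8-frame clips of synthetic video
    print(model(dummy).shape)                 # torch.Size([2, 8, 10])
```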
Extensive work has been done on tools for assessing cataract surgery, whereas tools that assess residents' performance in other ophthalmic surgeries are not well researched. Research has likewise focused on surgical skills, and clinical skills assessment requires further scrutiny. These scoring tools are considered time- and cost-intensive. We need to adopt tools that are effective and easy to implement in the Indian scenario, covering both surgical and clinical skills assessment; such assessments should be objective, time-efficient, and reliable.
Assessment is an important aspect of training, as it is one of the tools that gives feedback to the learner and helps the teacher modify the training process. Summative assessments aid in understanding the resident's proficiency at the end of residency, while formative assessment provides an opportunity to adapt the teaching method to each student's progress. A variety of tools assess the diverse skills that a resident is expected to acquire during residency, and these tools need to be incorporated into the present system of residency training in India. Both clinical and surgical skills require regular assessment in order to enhance the resident's learning process. With the COVID-19 pandemic at the fore, novel approaches to skills assessment need to be incorporated into the present system to allow safer modes of assessment while maintaining objectivity and ease of administration.

Financial support and sponsorship
Nil.

Conflicts of interest
There are no conflicts of interest.
Prognostic impact of psychoeducation program completion on inpatients with schizophrenia: a pilot cohort study | 9e4270db-6e6c-4f26-93b4-baee70600e61 | 11748585 | Patient Education as Topic[mh] | Schizophrenia is a severe chronic mental disorder characterized by heterogeneous clusters of behavioral, emotional, and cognitive symptoms that are frequently intractable to current psychiatric therapies. As a result, patients require long-term monitoring and pharmacotherapy, including antipsychotic drugs, to improve symptoms and prevent recurrence. In addition, outpatients with schizophrenia experience many additional life challenges, including poor family support, problems with interpersonal relationships, difficulty finding employment, poverty, and side effects of medication that lead to poor treatment adherence . Inpatient care and outpatient management must include interventions that can improve general social functioning. Several studies have reported that inpatients receiving psychoeducation demonstrate better treatment adherence and lower recurrence rates in the short-term and medium- to long-term . Therefore, greater access to effective psychoeducation programs may reduce readmission rates, improve outpatient quality of life (QOL), and lessen the burden on patients with schizophrenia. However, almost all previous studies on the efficacy of psychoeducation have included only patients completing the program; thus, how noncompletion affects patient prognosis is currently unknown. It has also proven difficult in such studies to closely monitor changes in the psychiatric condition of discharged patients and evaluate the ways in which psychoeducation is used in daily life. Moreover, concrete and common criteria for recurrence or readmission have not been specified in many previous studies , but interruptions in social life, including hospitalization, may be traumatic regardless of the reason. Shin-Abuyama hospital offers a psychoeducation program for inpatients aimed at preventing recurrence and readmission for schizophrenia, similar to many other psychiatric institutions. However, in the real-world clinical setting, unlike in research fields, a certain percentage (sometimes approximately half) of patients were unable to complete the program owing to early discharge or other treatment schedules. Therefore, it is an urgent task to clarify how the completion or noncompletion of psychoeducation programs affects the long-term prognosis of patients, but to the best of our knowledge, there are no such previous studies. Thus, this study aimed to clarify how the completion or noncompletion of a psychoeducation program affects all-cause discontinuation in outpatient treatment over a 5-year follow-up period after discharge via a single-center pilot cohort study. Study design This is a pilot prospective observational cohort study conducted at the psychiatric acute care ward of Shin-Abuyama Hospital, Osaka Institute of Clinical Psychiatry, Osaka, Japan, from 1st August 2016 to 31st October 2023. The inclusion period of the study was from 1st August 2016 to 31st July 2017 and the follow-up period was set at five years after discharge. Selection of study participants Study participants were recruited as follows. First, potential participants were prescreened by two or more registered nurses responsible for administering the psychoeducation program. 
Second, among the prescreened patients, the attending psychiatrist made the final decision about participation according to the following criteria: 1) a low risk of self-harm or harm due to psychiatric symptoms; 2) sufficient verbal communication skills; and 3) the ability to pay attention during the 60-min session. Finally, eligible participants were screened from among the program participants who met the following inclusion criteria: 1) had a diagnosis of schizophrenia (F20) according to the International Statistical Classification of Diseases and Related Health Problems, Tenth Edition (ICD-10) ; 2) were admitted to the acute psychiatric ward of Shin-Abuyama Hospital during the inclusion period; 3) participated in a psychoeducation program for inpatients from the first session; and 4) were able to provide informed consent. The exclusion criteria were as follows: 1) diagnosis of a cooccurring psychiatric disorder, 2) never attended a psychoeducational session, 3) failure to be discharged after the program, or 4) withdrawal of consent. We defined participants who attended all sessions as the completion group (CG) and those who missed one or more, albeit not all, sessions as the noncompletion group (NG). Psychoeducation program for inpatients The psychoeducation program for inpatients at Shin-Abuyama Hospital consists of five semistructured group sessions (Table ) based on the Japanese Psychoeducation Promotion Guidelines toolkit . All five sessions were set up by different experts, and the importance of different interventions was the focus of the lectures. Session 1 was conducted by a psychiatrist, Session 2 by an occupational therapist, Session 3 by a mental health worker, Session 4 by a pharmacist, and Session 5 by a nurse. In addition, two or three nurses attended all the sessions as coleaders. If inpatients with schizophrenia did not attend the session, the reasons (worsening of the medical condition, ward transfer, refusal to participate, training for discharge or discharged) were recorded in their medical record and generally categorized as negative (worsening of the medical condition, ward transfer or refusal to participate) or positive reason (training for discharge or discharged). Variables Participant characteristics Age, sex, housing status (home, institution and homeless), living with family or not, marital status (married, divorced and single), employment status (general employment, disabled employment, in employment training and unemployed), highest level of education (university or graduate school, senior high school and junior high school), age of onset (first episode), duration (years) of illness, type of hospitalization (involuntary admission or not), duration (days) of hospitalization in the psychiatric acute care ward, chlorpromazine equivalent dose of antipsychotic medication , the rates occupational therapy sessions attended, active participation in occupational therapy sessions (cut-off ≥ 50% or not), the rates of psychoeducation sessions attended. Outcome The primary outcome in this study was the duration of outpatient treatment (DOT) . We defined the day of discharge as day 0 and the day of all-cause discontinuation of outpatient treatment (e.g., readmission, recurrence, suicide, or interruption of regular outpatient hospital visits) as the DOT end day. All participants were followed up for 5 years (1825 days) after discharge via medical records and, if there was difficulty, by phone. 
In cases where the defining event for DOT could not be determined precisely, we defined the day after the last outpatient visit as censoring and used it for the analysis. The secondary outcomes were 1) comparative risk (hazard ratio) of all-cause discontinuation of outpatient treatment, 2) the proportion of events precipitating DOT ending each year after discharge, 3) the correlation of DOT with the rates of psychoeducation sessions attended, 4) changes in Global Assessment of Functioning (GAF) scores and QOL dimension scores after the program compared with baseline. GAF scores were collected from medical records. And we adopted the Japanese version of the Schizophrenia Quality of Life Scale (J-SQLS) as a self-rating index of QOL . J-SQLS is a disease-specific subjective QOL rating scale used here as an alternative index of the effect of inpatient treatment. The scale includes 30 items in total with each item scored from 0 to 4. The lower the score, the better the condition. And this scale consists of three subscales: “motivation/energy” (ME), “psychological/social relations” (PS), and “symptoms/side effects” (SS); ME (7 items) assesses motivation and activity levels such as "like to plan ahead", "tend to stay at home and do not go out" and "able to catty out daily activities”. PS (15 items) assesses psychological aspects, including feelings of loneliness, anxiety and depression such as "worry about thing", "feel lonely" and "feel people avoid me". SS (8 items) assesses issues related to medication side effects characteristic of schizophrenia, such as "sleep is disturbed", "get muscle twitches" and "get dizzy spells". J-SQLS was administered both before and after the psychoeducation program, as well as changes in subscores after the intervention, were compared between the CG and NG. And 5) Comparison with CG on different reasons (positive or negative) for NG. Study size calculations Since no previous studies exist, hazard ratios were assumed, and sample sizes were calculated on the basis of clinical realities. A hazard ratio of 0.3 between the CG and NG, an allocation ratio of 1:1, an inclusion period of 1 year, a follow-up period of 5 years, a log-rank test, 80% statistical power, and a type I error rate of p = 0.05 were adopted, and the sample size was calculated to be 14 participants per group, for a total of 28 participants . To eliminate the possibility of sampling bias, we attempted to recruit as many participants as possible within the inclusion period. Statistical analysis Statistical analyses were performed via SPSS Statistics version 27 (IBM Corp., Armonk, NY, USA). Patient age, age of onset (first episode), duration (years) of illness, duration (days) of hospitalization in the psychiatric acute care ward, chlorpromazine equivalent dose in the psychiatric acute care ward, the rates of occupational therapy sessions attended, the rates of psychoeducation sessions attended, GAF score, and J-SQLS score were compared between groups by Student’s t test or Welch’s t test, as indicated, after verifying homoscedasticity. The proportions of sex, housing status, living with family or not and type of hospitalization (involuntary admission or not) were compared via Fisher’s exact test. The proportions of marital status, employment status and highest level of education were compared via Fisher-Freeman-Halton’s exact test. DOT was analyzed via the Kaplan–Meier method and compared between groups via the log-rank test. 
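For reference, the power calculation above (hazard ratio 0.3, two-sided alpha of 0.05, 80% power, 1:1 allocation, 28 participants in total) can be approximated with Schoenfeld's formula for the log-rank test. The sketch below is only a rough reconstruction: the expected event probability is an assumption of this illustration, not a figure reported in the text, and the authors' calculation, which modeled a 1-year inclusion period and 5-year follow-up, may have proceeded differently.

```python
from math import ceil, log
from scipy.stats import norm

def logrank_total_n(hr: float, power: float = 0.80, alpha: float = 0.05,
                    allocation: float = 0.5, event_prob: float = 0.8) -> int:
    """Schoenfeld-style approximation of the total sample size for a log-rank test.

    event_prob is the assumed proportion of participants who experience the event
    during follow-up; it is an illustrative assumption, not a figure from the study.
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    required_events = z ** 2 / (allocation * (1 - allocation) * log(hr) ** 2)
    return ceil(required_events / event_prob)

if __name__ == "__main__":
    print(logrank_total_n(hr=0.3))   # about 28 participants under these assumptions
```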
The hazard ratio for the differential risk of discontinuation of outpatient treatment during follow-up was calculated via the multivariate Cox proportional hazard regression model. To reveal how responsiveness to pharmacotherapy and differences between psychoeducation and treatment attitudes affect prognosis, we selected the following independent variables: duration (days) of hospitalization in the psychiatric acute care ward, chlorpromazine equivalent dose, the rates of occupational therapy session attended and psychoeducation program completion. All these four variables were checked for each interaction using linear multiple regression analysis, and those with no significant interaction were adopted as independent variables for the Cox regression analysis. Disruption event occurrence rates for each year were compared via Fisher's exact test because fewer than five such events occurred during each year. Correlations between DOT and the rates of sessions attended were analyzed via Pearson’s method. For the GAF and J-SQLS subscores, comparisons between groups were made using independent t tests for before and after the programme and for the changes. Of the NG groups, groups were divided into two groups for this reason of NG (positive or negative) and between groups comparisons were performed to reveal the difference of effect to prognosis. And we adopted to compare Mann–Whitney u test for quantitative date and Fisher’s exact test or Fisher-Freeman-Halton’s exact test for categorical data. DOT was analyzed via the Kaplan–Meier method and compared between groups (positive or negative reason) via the log-rank test. All the statistical comparisons were two-tailed, and the statistical significance level was set at p = 0.05. In the case of missing values, those values were excluded, and only the obtained data were analyzed. Ethical considerations This study was conducted with the approval of the Institutional Review Board of Shin-Abuyama Hospital (2016–1) and conformed to the requirements of the latest version of the Declaration of Helsinki. The following ethical considerations were incorporated into the study design, enrollment criteria, follow-up, and analysis: primacy of individual patient wishes, guarantee against therapeutic disadvantages, freedom to withdraw consent, protection of personal data, purpose of use, and disposal of personal data. All participants provided their written consent after receiving a full explanation of the study procedures, long-term follow-up, analysis of participants and patient rights at the time of recruitment. This is a pilot prospective observational cohort study conducted at the psychiatric acute care ward of Shin-Abuyama Hospital, Osaka Institute of Clinical Psychiatry, Osaka, Japan, from 1st August 2016 to 31st October 2023. The inclusion period of the study was from 1st August 2016 to 31st July 2017 and the follow-up period was set at five years after discharge. Study participants were recruited as follows. First, potential participants were prescreened by two or more registered nurses responsible for administering the psychoeducation program. Second, among the prescreened patients, the attending psychiatrist made the final decision about participation according to the following criteria: 1) a low risk of self-harm or harm due to psychiatric symptoms; 2) sufficient verbal communication skills; and 3) the ability to pay attention during the 60-min session. 
Finally, eligible participants were screened from among the program participants who met the following inclusion criteria: 1) had a diagnosis of schizophrenia (F20) according to the International Statistical Classification of Diseases and Related Health Problems, Tenth Edition (ICD-10) ; 2) were admitted to the acute psychiatric ward of Shin-Abuyama Hospital during the inclusion period; 3) participated in a psychoeducation program for inpatients from the first session; and 4) were able to provide informed consent. The exclusion criteria were as follows: 1) diagnosis of a cooccurring psychiatric disorder, 2) never attended a psychoeducational session, 3) failure to be discharged after the program, or 4) withdrawal of consent. We defined participants who attended all sessions as the completion group (CG) and those who missed one or more, albeit not all, sessions as the noncompletion group (NG). The psychoeducation program for inpatients at Shin-Abuyama Hospital consists of five semistructured group sessions (Table ) based on the Japanese Psychoeducation Promotion Guidelines toolkit . All five sessions were set up by different experts, and the importance of different interventions was the focus of the lectures. Session 1 was conducted by a psychiatrist, Session 2 by an occupational therapist, Session 3 by a mental health worker, Session 4 by a pharmacist, and Session 5 by a nurse. In addition, two or three nurses attended all the sessions as coleaders. If inpatients with schizophrenia did not attend the session, the reasons (worsening of the medical condition, ward transfer, refusal to participate, training for discharge or discharged) were recorded in their medical record and generally categorized as negative (worsening of the medical condition, ward transfer or refusal to participate) or positive reason (training for discharge or discharged). Participant characteristics Age, sex, housing status (home, institution and homeless), living with family or not, marital status (married, divorced and single), employment status (general employment, disabled employment, in employment training and unemployed), highest level of education (university or graduate school, senior high school and junior high school), age of onset (first episode), duration (years) of illness, type of hospitalization (involuntary admission or not), duration (days) of hospitalization in the psychiatric acute care ward, chlorpromazine equivalent dose of antipsychotic medication , the rates occupational therapy sessions attended, active participation in occupational therapy sessions (cut-off ≥ 50% or not), the rates of psychoeducation sessions attended. Age, sex, housing status (home, institution and homeless), living with family or not, marital status (married, divorced and single), employment status (general employment, disabled employment, in employment training and unemployed), highest level of education (university or graduate school, senior high school and junior high school), age of onset (first episode), duration (years) of illness, type of hospitalization (involuntary admission or not), duration (days) of hospitalization in the psychiatric acute care ward, chlorpromazine equivalent dose of antipsychotic medication , the rates occupational therapy sessions attended, active participation in occupational therapy sessions (cut-off ≥ 50% or not), the rates of psychoeducation sessions attended. The primary outcome in this study was the duration of outpatient treatment (DOT) . 
We defined the day of discharge as day 0 and the day of all-cause discontinuation of outpatient treatment (e.g., readmission, recurrence, suicide, or interruption of regular outpatient hospital visits) as the DOT end day. All participants were followed up for 5 years (1825 days) after discharge via medical records and, if there was difficulty, by phone. In cases where the defining event for DOT could not be determined precisely, we defined the day after the last outpatient visit as censoring and used it for the analysis. The secondary outcomes were 1) comparative risk (hazard ratio) of all-cause discontinuation of outpatient treatment, 2) the proportion of events precipitating DOT ending each year after discharge, 3) the correlation of DOT with the rates of psychoeducation sessions attended, 4) changes in Global Assessment of Functioning (GAF) scores and QOL dimension scores after the program compared with baseline. GAF scores were collected from medical records. And we adopted the Japanese version of the Schizophrenia Quality of Life Scale (J-SQLS) as a self-rating index of QOL . J-SQLS is a disease-specific subjective QOL rating scale used here as an alternative index of the effect of inpatient treatment. The scale includes 30 items in total with each item scored from 0 to 4. The lower the score, the better the condition. And this scale consists of three subscales: “motivation/energy” (ME), “psychological/social relations” (PS), and “symptoms/side effects” (SS); ME (7 items) assesses motivation and activity levels such as "like to plan ahead", "tend to stay at home and do not go out" and "able to catty out daily activities”. PS (15 items) assesses psychological aspects, including feelings of loneliness, anxiety and depression such as "worry about thing", "feel lonely" and "feel people avoid me". SS (8 items) assesses issues related to medication side effects characteristic of schizophrenia, such as "sleep is disturbed", "get muscle twitches" and "get dizzy spells". J-SQLS was administered both before and after the psychoeducation program, as well as changes in subscores after the intervention, were compared between the CG and NG. And 5) Comparison with CG on different reasons (positive or negative) for NG. Since no previous studies exist, hazard ratios were assumed, and sample sizes were calculated on the basis of clinical realities. A hazard ratio of 0.3 between the CG and NG, an allocation ratio of 1:1, an inclusion period of 1 year, a follow-up period of 5 years, a log-rank test, 80% statistical power, and a type I error rate of p = 0.05 were adopted, and the sample size was calculated to be 14 participants per group, for a total of 28 participants . To eliminate the possibility of sampling bias, we attempted to recruit as many participants as possible within the inclusion period. Statistical analyses were performed via SPSS Statistics version 27 (IBM Corp., Armonk, NY, USA). Patient age, age of onset (first episode), duration (years) of illness, duration (days) of hospitalization in the psychiatric acute care ward, chlorpromazine equivalent dose in the psychiatric acute care ward, the rates of occupational therapy sessions attended, the rates of psychoeducation sessions attended, GAF score, and J-SQLS score were compared between groups by Student’s t test or Welch’s t test, as indicated, after verifying homoscedasticity. The proportions of sex, housing status, living with family or not and type of hospitalization (involuntary admission or not) were compared via Fisher’s exact test. 
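As a small illustration of how the J-SQLS subscale scores described above are assembled, the sketch below sums 0-4 item responses into the three subscales (ME, 7 items; PS, 15 items; SS, 8 items; lower is better). The item-to-subscale index lists are placeholders rather than the instrument's published key, and any rescaling defined in the scoring manual is omitted.

```python
# Placeholder item indices; the real J-SQLS key assigns specific items to each subscale.
SUBSCALE_ITEMS = {
    "ME": range(0, 7),     # motivation/energy, 7 items
    "PS": range(7, 22),    # psychological/social relations, 15 items
    "SS": range(22, 30),   # symptoms/side effects, 8 items
}

def jsqls_subscale_scores(responses):
    """Sum 0-4 item responses into raw subscale totals (lower = better condition)."""
    if len(responses) != 30 or not all(0 <= r <= 4 for r in responses):
        raise ValueError("expected 30 item responses scored 0-4")
    return {name: sum(responses[i] for i in items) for name, items in SUBSCALE_ITEMS.items()}

print(jsqls_subscale_scores([1] * 30))   # {'ME': 7, 'PS': 15, 'SS': 8}
```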
The proportions of marital status, employment status and highest level of education were compared via Fisher-Freeman-Halton’s exact test. DOT was analyzed via the Kaplan–Meier method and compared between groups via the log-rank test. The hazard ratio for the differential risk of discontinuation of outpatient treatment during follow-up was calculated via the multivariate Cox proportional hazard regression model. To reveal how responsiveness to pharmacotherapy and differences between psychoeducation and treatment attitudes affect prognosis, we selected the following independent variables: duration (days) of hospitalization in the psychiatric acute care ward, chlorpromazine equivalent dose, the rates of occupational therapy session attended and psychoeducation program completion. All these four variables were checked for each interaction using linear multiple regression analysis, and those with no significant interaction were adopted as independent variables for the Cox regression analysis. Disruption event occurrence rates for each year were compared via Fisher's exact test because fewer than five such events occurred during each year. Correlations between DOT and the rates of sessions attended were analyzed via Pearson’s method. For the GAF and J-SQLS subscores, comparisons between groups were made using independent t tests for before and after the programme and for the changes. Of the NG groups, groups were divided into two groups for this reason of NG (positive or negative) and between groups comparisons were performed to reveal the difference of effect to prognosis. And we adopted to compare Mann–Whitney u test for quantitative date and Fisher’s exact test or Fisher-Freeman-Halton’s exact test for categorical data. DOT was analyzed via the Kaplan–Meier method and compared between groups (positive or negative reason) via the log-rank test. All the statistical comparisons were two-tailed, and the statistical significance level was set at p = 0.05. In the case of missing values, those values were excluded, and only the obtained data were analyzed. This study was conducted with the approval of the Institutional Review Board of Shin-Abuyama Hospital (2016–1) and conformed to the requirements of the latest version of the Declaration of Helsinki. The following ethical considerations were incorporated into the study design, enrollment criteria, follow-up, and analysis: primacy of individual patient wishes, guarantee against therapeutic disadvantages, freedom to withdraw consent, protection of personal data, purpose of use, and disposal of personal data. All participants provided their written consent after receiving a full explanation of the study procedures, long-term follow-up, analysis of participants and patient rights at the time of recruitment. Participant characteristics A total of 72 patients with schizophrenia were admitted to the ward during the inclusion period, 61 of whom prescreened for psychoeducation program by registered nurses and 50 of whom prescreened by psychiatrist after that. Of these, 38 participated in the program and 36 met the eligibility criteria, 33 of whom consented to the study. Furthermore, of these 33 participants, one was excluded because of withdrawal of consent. Among the 32 eligible participants, 18 were CGs, and 14 were NGs and were followed for up to 5 years. One CG participant was lost to follow-up at day 1543 due to relocation, resulting in a final sample of 17 CG and 14 NG participants (Fig. ). 
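The study ran these survival analyses in SPSS. Purely as an illustration of the same pipeline (Kaplan-Meier estimates, a log-rank comparison by program completion, and a multivariate Cox model over the four covariates named above), the sketch below uses the open-source lifelines package with invented toy records; the numbers are not the study's data, and the small penalizer is included only to stabilize the fit on such a tiny example.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Invented records: dot_days = duration of outpatient treatment; event = 1 if
# outpatient treatment was discontinued (e.g., readmission), 0 if censored.
df = pd.DataFrame({
    "dot_days":  [918, 1543, 640, 120, 225, 95, 300, 410],
    "event":     [1,   0,    1,   1,   1,   1,  1,   1],
    "completed": [1,   1,    1,   0,   0,   0,  0,   1],    # psychoeducation completion
    "los_days":  [45,  60,   38,  52,  41,  70, 55,  48],   # days in the acute ward
    "cpz_eq":    [450, 600,  300, 800, 500, 650, 400, 350], # chlorpromazine equivalents (mg)
    "ot_rate":   [80,  90,   70,  40,  55,  30, 60,  85],   # % occupational therapy sessions attended
})

# Kaplan-Meier estimates by completion status
km = KaplanMeierFitter()
for flag, label in [(1, "completion"), (0, "noncompletion")]:
    grp = df[df["completed"] == flag]
    km.fit(grp["dot_days"], grp["event"], label=label)
    print(label, "median DOT:", km.median_survival_time_)

# Log-rank comparison of the two groups
cg, ng = df[df["completed"] == 1], df[df["completed"] == 0]
print("log-rank p:", logrank_test(cg["dot_days"], ng["dot_days"], cg["event"], ng["event"]).p_value)

# Multivariate Cox model over the four covariates used in the study
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="dot_days", event_col="event")
cph.print_summary()
```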
Of the 32 inpatients enrolled in the study, 18 completed the psychoeducation program (CG), and 14 did not (NG) (Table ). There were no significant group differences (CG vs. NG) in age, sex ratio, housing status ratio, living with family ratio, marital status ratio, employment status ratio, highest level of education ratio, age of onset (first episode), duration (years) of illness, type of hospitalization (involuntary admission or not), duration (days) of hospitalization in the psychiatric acute care ward, chlorpromazine equivalent dose, the rates of occupational therapy sessions attended and active participation in occupational therapy sessions or not, except for the rates of psychoeducation session attended (Table ). Program attendance, the reason for noncompletion, DOT and events by participants are summarized in Table . To ensure the anonymity of participants, age is given as age-ranges in Table . And all NG groups had an event within the observation period. There were 6 negative reasons for noncompletion (3 worsening the medical condition, 3 ward transfer) and 8 positive reasons (1 discharged, 7 training for discharge), with no obvious “refusals to attend”. Primary outcome (DOT) The final analysis included 18 CG and 14 NG participants. Survival analysis revealed significantly longer DOT in the remaining CG patients than in the remaining NG patients (918.2 (174.3) days, 95% CI: 576.7–1259.8 vs. 225.5 (35.7) days, 95% CI: 155.5–295.5; p = 0.001 by log-rank test) (Fig. ). Secondary outcomes In the linear multiple regression analysis, no significant interactions were found for any combination of the four variables; duration of hospitalization and chrolpromazine equivalent ( p = 0.978), duration of hospitalization and the rates of occupational therapy sessions attended ( p = 0.722), duration of hospitalization and psychoeducation program completion ( p = 0.143), chrolpromazine equivalent and the rates of occupational therapy sessions attended ( p = 0.753), chrolpromazine equivalent and psychoeducation program completion ( p = 0.635), and the rates of occupational therapy sessions attended and psychoeducation program completion ( p = 0.453) (Table ). In a multivariate Cox proportional hazard analysis using duration (days) of hospitalization (HR = 0.992, p = 0.395), chrolpromazine equivalent (HR = 1.000, p = 0.633), the rates of occupational therapy sessions attended (HR = 0,999, p = 0894), and psychoeducation program completion as independent variables revealed a significant difference in program completion (HR = 4.450, p = 0.002). Participants who did not complete the psychoeducation program were at 4.450-fold greater risk of all-cause discontinuation of outpatient treatment than participants who attended all sessions (Table ). For all participants, discontinuation of outpatient treatment was attributed to “readmission due to recurrence” (Table ). The cumulative number of readmissions was significantly greater among program noncompleters than among program completers by the end of each follow-up year (Table ). There was a significantly weak correlation between DOT and the rates of psychoeducation program sessions attended (Pearson's r = 0.384, p = 0.030, 95% CI: 0.001–0.646) (Fig. ). Between-group comparisons before the program, after the program, and changes in GAF and J-SQLS subscores (ME, PS and SS) revealed no significant differences (Table ). In comparison between two groups (positive or negative reason) of NG, no significant group differences were found for all variables (Table ). 
In survival analysis, there was no significant difference between the two groups (median 107.0 (70.0) days, 95% CI: 0.0–244.2 vs. median 199.0 (83.3) days, 95% CI: 35.8–362.2; p = 0.505 by log-rank test).
Participants who did not complete the psychoeducation program were at 4.450-fold greater risk of all-cause discontinuation of outpatient treatment than participants who attended all sessions (Table ). For all participants, discontinuation of outpatient treatment was attributed to “readmission due to recurrence” (Table ). The cumulative number of readmissions was significantly greater among program noncompleters than among program completers by the end of each follow-up year (Table ). There was a significantly weak correlation between DOT and the rates of psychoeducation program sessions attended (Pearson's r = 0.384, p = 0.030, 95% CI: 0.001–0.646) (Fig. ). Between-group comparisons before the program, after the program, and changes in GAF and J-SQLS subscores (ME, PS and SS) revealed no significant differences (Table ). In comparison between two groups (positive or negative reason) of NG, no significant group differences were found for all variables (Table ). And in survival analysis, there were no significant difference between two groups (median 107.0 (70.0) days, 95% CI: 0.0–244.2 vs. median 199.0 (83.3) days, 95% CI: 35.8–362.2; p = 0.505 by log-rank test). To the best of our knowledge, this is the first study comparing the long-term (5 years) efficacy of inpatient psychoeducation for schizophrenia management after hospital release between participants attending all sessions and participants missing one or more (but not all) sessions. Indeed, this is possibly the first pilot cohort study to examine the prognostic impact of the frequency of psychoeducation participation in a small but real-world setting. Therefore, the NG demonstrated an approximately 4.5-fold greater risk of all-cause discontinuation of outpatient treatment as well as earlier readmission for disease relapse. Despite the small sample size in this study, there were no differences in characteristic factors related to schizophrenia treatment in the two groups, including housing, living, marriage, employment and education. Although no significant factors other than psychoeducation completion were found, the possibility cannot be excluded that this may indicate the existence of potential factors affecting psychoeducation completion. On the other hand, no interaction was found for participation in occupational therapy. the results may have been contributed to by the acquisition of knowledge and skills through lectures and/or cognitive-behavioral therapeutic interventions in psychoeducation, rather than occupational therapy. Therefore, it may be difficult to say that, at least, only good (or bad) treatment attitudes have an effect on prognosis. In addition, we have examined the possibility that reasons (positive or negative) for NG might affect prognosis, but it is also clear that noncompletion for positive reasons for treatment does not necessarily predict a positive outcome. Although these may also support the need for psychoeducation program completion, it is essential to consider some potential factors for the reasons leading to completion or noncompletion of psychoeducation and their association with prognosis. However, although the results of this study suggested that psychoeducation improved long-term outcomes, there were no significant differences between the two groups in QOL or GAF in the short-term. The results of this study are not consistent with those of a previous study in the short-term . 
A recent meta-analysis concluded that complete psychoeducation reduced the recurrence/readmission rate among patients with schizophrenia spectrum disorders in the acute phase . These interventions had similar preconditions as those of the current study. This discrepancy in results may indicate that this study may have failed to employ appropriate measures of the short-term effects of psychoeducation, as well as missing potential and important factors that influence completion of psychoeducation. Nevertheless, as psychoeducation programs could have a positive impact on prognosis, it will be essential to provide such programs with sufficient flexibility so that they can be completed by all inpatients. Robinson et al. reported that more than half of patients with schizophrenic disorders (63.1%) relapsed within 3 years after the first hospitalization, a rate similar to that of our CG. In contrast, all patients who did not complete the psychoeducation program relapsed within 2 years of discharge. Patients with poor medication adherence as outpatients are 2.4 times more likely to be rehospitalized than those with good medication adherence, and nonadherence is the major cause of relapse or recurrence . The lower recurrence rate among patients completing our psychoeducation program cannot be explained by the characteristics of the participants in the NG (Table ); however, the study could suggest that the program may have contributed to some change in life or treatment attitude related to improve medication adherence. The efficacy of psychoeducation for preventing recurrence has been verified by multiple studies . In addition, psychoeducation was reported to improve patient QOL . Although this study did not reveal direct improvements in QOL, it would be necessary to clarify how the completion of psychoeducation contributed to patients' QOL and other factors that stabilized their lives after discharge. There are several distinct models of psychoeducation for patients with schizophrenia, such as programs including the participation of families as well as patients , programs delivered exclusively for outpatients or continuing in an outpatient setting , and community-based programs . Programs adopting a combination of educational, behavioral, and emotional strategies are highly effective at maintaining medication adherence and reducing recurrence and readmission , whereas psychoeducational interventions without behavioral elements or support services may not be as effective . In this study, the target, method, frequency, and environment of the intervention differed from those of the aforementioned studies; thus, it is difficult to fully explain the comparison of efficacy rates and the reason for prolonged DOT. Nonetheless, completion of these programs appears essential for full efficacy . The only clear difference between completers and noncompleters is the amount of knowledge acquired , but a causal relationship between the amount of knowledge and recurrence risk has not been demonstrated. Similarly, while poor medication adherence is strongly associated with recurrence , no causal relationship has been established between the contents of psychoeducation and adherence. The multiaxial approach to psychoeducation programs by five professions (psychiatrists, occupational therapists, mental health workers, pharmacists and nurses) may have influenced the results of this study and further confirmed a possible correlation between the rates of sessions attended and DOT. 
These findings strongly support the hypothesis that the completion of psychoeducation programs is essential for relapse prevention and suggest that greater program attendance improves outpatient outcomes. In the future, expanding the sample size and further analyzing the relationship between missed content and recurrence may be useful in developing and improving psychoeducation programs. Limitations This study is a pilot study; however, owing to the small sample size, a larger (possibly multicenter) study is warranted. The small sample size may have introduced sampling bias and statistical errors. This study also has several other major limitations. First, there are potential confounding factors. We were not able to adequately identify a range of potential confounding factors related to schizophrenia treatment, and this study could only partially clarify which elements of the patients were affected by psychoeducation program completion and how the reasons for program noncompletion affected their prognosis. It would have been necessary to consider the influence of the patients' characteristics, including their attitude to treatment, and their supportive environment. Second, the selection of independent variables for the multivariate analyses is a limitation. There are several well-known factors that may affect schizophrenia treatment or prognosis. In this study, we focused on interventions in inpatient treatment that could influence psychoeducation. Due to sample size limitations, we had to omit some important variables. Although there were no obvious differences between the two groups among the basic attribute variables employed in this study, the possibility that the choice of variables may have influenced the results cannot be excluded. Third, there were differences in delivery among the leaders of the psychoeducation program. Although our psychoeducation program was created on the basis of a toolkit, there is no nationwide program in Japan for training psychoeducation practitioners. There are also major differences in program content and emphasis (e.g., number of sessions) among centers. To solve this problem, establishing a standardized program and a supervision system to ensure the homogeneity of the program's effectiveness is essential. Fourth, there may be marked differences in cognitive abilities among patients, further compounding heterogeneity in the outcome within and among treatment centers. Although there was no statistically significant difference in the GAF score between the completion and noncompletion groups, it has been reported that disease severity can hinder program completion. More vulnerable patients may not be able to fully learn and use the coping strategies included in the program to prevent recurrence . Such cases may require more pervasive monitoring rather than relying on the benefits of psychoeducation. It is also critical to identify and validate the most therapeutically effective elements of the program for emphasis, especially for cases with limited cognitive capacity. Finally, we were not able to verify whether the prognosis improved as a result of the completion of psychoeducation and adherence to medication, or whether the prognosis improved as a result of improved attitudes toward medication or insight into the disease due to the completion of psychoeducation. To solve this problem, additional studies that include assessments of personality and cognitive function would be appropriate. 
Future perspective It has been reported that medication adherence may decrease with time following schizophrenia diagnosis , whereas the risk of death may increase . These findings suggest that interventions aimed at improving adherence should instead be instituted or repeated during this critical period. There is also evidence that psychoeducation is not effective for patients at onset . Therefore, first hospitalization is an appropriate time for psychoeducation despite challenges in some cases, such as early release or poor patient condition. Future studies should expand the sample size to examine how program completion is related to DOT, along with attitudes toward medication, insight into the disease and medication adherence. 
Noncompletion of an inpatient psychoeducation program was associated with a significantly shorter duration of uninterrupted outpatient treatment and earlier symptom recurrence. These results may suggest that the completion of psychoeducation programs and related potential factors have a positive effect on patient prognosis. All efforts should be made to allow inpatients the opportunity to complete psychoeducation programs as early as possible after the onset of illness, despite time constraints and other challenges, to prevent relapse or recurrence. |
Distinctive blood and salivary proteomics signatures in Qatari individuals at high risk for cardiovascular disease | 3a3601ec-0807-4f08-b64b-84518ce000c6 | 11790934 | Biochemistry[mh] | Cardiovascular disease (CVD) is a group of conditions affecting the heart and blood vessels, such as heart failure, hypertension, stroke, coronary heart disease, and atherosclerosis . CVD stands as a leading cause of mortality worldwide, responsible for approximately one-third of all deaths, and is recognized as the primary noncommunicable disease . Furthermore, the CVD direct medical cost is predicted to increase to $818 billion by 2030 compared to $273 billion in 2010 in the United States . Various risk factors contribute to the development of CVD, including increased body mass index (BMI), smoking, diabetes, high levels of low-density lipoprotein cholesterol, bad dietary habits , and inflammation . The prevalence of these risk factors differs among populations . Data from the planning and statistical authorities in Qatar indicate that CVD is among the leading causes of mortality in 2020, contributing to 29% of all deaths in the country . This is mainly attributed to the prevalent risk factors for CVD in Qatar . A recent study showed that one in every five Qatari subjects is either prediabetic or diabetic, and one in three is hypertensive . These risk factors are predicted to substantially increase by 2050 . A recent study examined the expected burden of CVD on diabetes over the next 10 years in Qatar and predicted a direct cost of 11.40 billion US$ and an indirect cost to surpass 8.30 billion US$ . Despite these concerning statistics, published studies investigating CVD risk among the Qatari and Arab populations at large are still scarce . The early identification of individuals who are at high risk for developing CVD is crucial for early and cost-effective intervention , . Therefore, there is a high demand for diagnostic biomarkers as they help in the early detection of diseases . In our previous study, we assessed the association between the salivary microbiome and CVD risk using a large cohort of Qatar Genome Project (QGP) participants . We showed significant differences in the salivary microbiome composition between HR and LR CVD subjects . Recent advancements in protein assays allowed high throughput proteomic profiling, enabling the rapid discovery of new biomarkers by examining large numbers of proteins involved in various biological pathways , . Among the pioneering high-throughput proteomic platforms utilized extensively in epidemiological and clinical investigations is the SOMAscan platform . This platform employs single-stranded RNA or DNA sequences, termed aptamers, capable of recognizing epitopes on folded proteins . With the capacity to analyze approximately 7,000 proteins in a relatively small sample volume, the SOMAscan platform offers exceptional sensitivity and reproducibility . Its application has proven instrumental in identifying protein signatures linked to various diseases and potential biomarkers, including those associated with glomerulonephritis , cancer , Parkinson’s disease , asthma , and systemic sclerosis among others. In CVD, SOMAscan was used to search for new CVD biomarkers in population-based studies like the Heart and Soul study (USA) , Framingham Heart Study (USA) , , Jackson Heart Study (US) , in addition to other cohorts from Iceland and Italy (InCHIANTI study) . 
These studies have identified new CVD biomarkers , – but were mainly done on populations with European ancestry , , , . It is also worth noting that in CVD, large-scale proteomic profiling is mainly performed in blood, and the proteome of other body fluids remains understudied . To date, approximately 3,000 human saliva proteins have been identified, encompassing various enzymes, immunoglobulins, glycoproteins, and hormones, which collectively contribute to the maintenance of oral cavity homeostasis . Saliva, often regarded as a "mirror of the gut", holds promise for diagnostic applications due to its diverse molecular composition derived from the local blood supply, microbes, and cellular constituents . Moreover, saliva is an accessible and non-invasive sample that is easy to collect, and as a result, salivary biomarkers can be applied for developing rapid diagnostic tools . Despite these advantages, aptamer-based methods in saliva have primarily focused on cardiac markers such as troponins and myoglobin, while comprehensive proteomic analyses of CVD protein signatures remain sparse . The present study aims to leverage the SOMAscan proteomic panel to uncover biomarkers associated with CVD risk in both saliva and plasma samples from the Qatari population, thus laying the groundwork for identifying non-invasive salivary biomarkers for CVD risk and enhancing our understanding of proteomic signatures linked to CVD risk in saliva.
Ethical statement Approval for the study was obtained from the Institutional Review Board (IRB) of Sidra Medicine under protocol #1510001907, and from Qatar Biobank (QBB) under protocol #E/2018/QBB-RES-ACC-0063/0022. Prior to sample collection, all study participants signed an informed consent, and the experiments were conducted in accordance with the approved guidelines and in accordance with the Declaration of Helsinki. Study population and clinical data Cardiovascular disease (CVD) risk scores were computed to assess the risk of suffering a heart attack over the next 10 years, using the Cox proportional-hazards regression, as detailed in our previous report . From the same study cohort, we randomly selected 50 subjects with low-risk to develop CVD (CVD score < 10) (CVD-LR) and 50 subjects with high-risk (HR) (CVD score > 20) (CVD-HR) . The study included Qatari participants aged 18–64 years. Participants were excluded if they had recent antibiotics use (within three months) or suffered from chronic diseases (e.g., gastroesophageal reflux disease, Crohn’s disease, thyroid disease or cancer). De-identified samples, along with anthropometric and clinical data for all study subjects, were collected from QBB. In brief, enrolled subjects were advised to fast for at least 8 h before the collection of samples. Matched plasma and saliva samples were collected from the same subjects following QBB standardized sample collection protocol , . Around 60 ml of blood was collected and used for routine blood tests. Then, the remaining was aliquoted and stored at −80°C , . For saliva, about 5 mL of unstimulated saliva was collected in a falcon tube, divided into aliquots of 0.4 mL, and stored at −80°C . SOMAscan proteomics The salivary and plasma proteome was characterized using the SOMAscan platform, which uses single-stranded DNA-based protein affinity reagents called SOMAmers (Slow Off-rate Modified Aptamers), as detailed in previous studies – . In essence, each SOMAmer® reagent selectively binds to a specific target protein, totaling approximately 1317 proteins. The SOMAscan assay involves distributing SOMAmers into various sample dilution bins tailored to the analyzed matrix. These diverse distributions and dilution schemes are designed to ensure that analyte concentrations fall within the linear range of the assay for each SOMAmer. In conventional matrices such as plasma and serum, SOMAmers are split into 0.005%, 1%, and 40% dilution bins. However, for non-traditional matrices like saliva, specific dilution bins are not predefined, and samples are typically analyzed at a single dilution, with all SOMAmers assayed accordingly. To establish the optimal dilution for saliva samples, we conducted SOMAScan assays using pooled and individual saliva samples serially diluted from 40% to 0.3125%. This process facilitated the identification of the optimal saliva dilution, determined to be 10% diluted in assay buffer, resulting in average assay values falling within the mid-range of the dynamic range for each SOMAmer. Throughout the SOMAScan assay, adherence to the manufacturer’s cell and tissue protocol instructions was maintained. Relative fluorescence unit values obtained from SOMAscan were normalized against the hybridization control to correct for any systematic effects introduced during hybridization. The hybridization control factor was determined by pooling all samples from different plates. 
Median normalization was applied across all samples within the arrays, ensuring a successful assessment of signal intensity variance based on the hybridization controls. SOMAscan data analysis The raw fluorescence data of 1,317 proteins were first normalized via quantile normalization using the "normalizeBetweenArrays" function from the limma package (v3.56.2) . UMAP analysis indicated that patients' BMI and age have a very strong confounding effect in segregating samples. Consequently, a differential expression analysis was conducted using limma, with consideration given to age and BMI within the design matrix. We used the 'lmFit' function for multiple linear regression, followed by the 'eBayes' function with the parameter 'robust = TRUE' to compute moderated t-statistics, F-statistics, and log-odds ratios. P -values were adjusted using the Benjamini & Hochberg method. Proteins showing p -values < 0.05 and at least a 50% fold-change between low and high CVD risk patients in either plasma or saliva were selected. Additionally, differentially expressed proteins demonstrating a consistent trend of expression change between the two tissues and significant statistical changes in both were selected as the initial biomarker candidates. For visualization purposes, the effects of age and BMI were first regressed out. Samples were then clustered into two groups based on UMAP representations reflecting low and high age/BMI values. The resulting cluster IDs were utilized to regress out the effects of age and BMI using the 'removeBatchEffect' function from the limma package. Functional enrichment analysis The enrichGO function from the R/Bioconductor clusterProfiler package (v4.8.3) was used to perform Gene Ontology (GO) and pathway enrichment analysis focusing on Biological Process ontologies. Only GO terms exhibiting an adjusted p-value < 0.05 were included in the analysis. Then, GO enrichment plots were generated utilizing the ggplot2 package. Estimation of protein biomarker importance using machine learning models The following machine learning models were used to determine the predictive importance of each marker in classifying CVD-HR and CVD-LR patients: Random Forest (RF) , Elastic-net (eNet) , partial least squares via mixOmics (pls) , XGBoost , generalized linear model (GLM), and Radial Basis Function (RBF) kernel SVM . We used the tidymodels R package to train the different models . Each model was trained using repeated cross-validation (5 folds, repeated 10 times). To avoid label imbalance during training, the different cross-validation subsets were generated in a stratified manner. Hyperparameter tuning was done using a grid search algorithm. The RF, eNet, and pls models had the highest performance in all tissues (plasma, saliva) and were selected to calculate the mean importance of each marker across the three models. The variable importance of each model was scaled to be within [0,1]. Statistical analysis and visualization The demographic and clinical data of the study cohort were analyzed using GraphPad Prism (10.1.2). Mann–Whitney U tests were utilized to compare variables, including age, BMI, systolic and diastolic blood pressure, glucose level, HbA1C, lipid profile, total protein, albumin, urea, and creatinine. Next, the Chi-square test was employed to compare the impact of smoking and sex between the CVD-HR group and the CVD-LR group. Statistical significance was set at p -values less than 0.05. 
All statistical analyses were conducted using R version 4.3.1, with the limma package (version 3.56.2) . The visualization of the results was carried out using the ggplot2 and ComplexHeatmap R packages . Venn diagrams were created using Intervene .
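For illustration, the differential-expression step described above can be sketched in R roughly as follows. This is a minimal sketch, not the authors' code: the object names (expr, pheno, age_bmi_cluster), the log2 transformation, and the factor coding of the risk groups are assumptions made for the example.

library(limma)

# 'expr' is assumed to be a matrix of SOMAscan RFU values (proteins x samples);
# 'pheno' a data.frame with columns group (CVD_LR / CVD_HR), age and bmi.
expr <- normalizeBetweenArrays(log2(expr), method = "quantile")

# Design matrix modelling the CVD risk group while adjusting for age and BMI
design <- model.matrix(~ group + age + bmi, data = pheno)

fit <- lmFit(expr, design)            # multiple linear regression per protein
fit <- eBayes(fit, robust = TRUE)     # moderated t-statistics

# BH-adjusted results for the CVD-HR vs CVD-LR coefficient
# (the coefficient name depends on the factor levels used)
res <- topTable(fit, coef = "groupCVD_HR", number = Inf, adjust.method = "BH")

# Candidate markers: p < 0.05 and at least a 50% fold-change (|FC| > 1.5)
candidates <- subset(res, P.Value < 0.05 & abs(logFC) > log2(1.5))

# For visualization only: remove age/BMI-driven structure using UMAP-derived
# cluster IDs (pheno$age_bmi_cluster is a hypothetical label), as in the methods
expr_adj <- removeBatchEffect(expr, batch = pheno$age_bmi_cluster,
                              design = model.matrix(~ group, data = pheno))

Shared plasma/saliva candidates with a consistent direction of change could then be obtained by intersecting the protein identifiers of the two candidate tables (for example with intersect()).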
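Similarly, the model-evaluation step can be sketched with tidymodels. Only the random forest arm is shown; the data frame 'dat', its 'risk' outcome column, the ranger engine options, and the omission of grid-search tuning are illustrative assumptions rather than the authors' exact configuration:

library(tidymodels)
library(vip)

set.seed(42)
# 'dat' is assumed to hold one row per subject: protein levels plus a factor
# column 'risk' with levels CVD_LR and CVD_HR.
folds <- vfold_cv(dat, v = 5, repeats = 10, strata = risk)  # stratified, repeated CV

rf_spec <- rand_forest(trees = 500) |>
  set_mode("classification") |>
  set_engine("ranger", importance = "permutation")

rf_wf <- workflow() |>
  add_formula(risk ~ .) |>
  add_model(rf_spec)

# Cross-validated discrimination (area under the ROC curve)
cv_res <- fit_resamples(rf_wf, resamples = folds, metrics = metric_set(roc_auc))
collect_metrics(cv_res)

# Permutation importance from a fit on all samples, rescaled to [0, 1]
rf_fit <- fit(rf_wf, data = dat)
imp <- vi(extract_fit_engine(rf_fit))
imp$Importance <- (imp$Importance - min(imp$Importance)) /
  (max(imp$Importance) - min(imp$Importance))

Applying the same [0, 1] rescaling to the elastic-net and PLS models would allow the per-marker importances to be averaged across the three best-performing models, as described above.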
Characteristics of the Study Cohort The baseline demographic and clinical characteristics of the study cohort, comprising a total of 100 individuals, are listed in Table . Based on the cardiovascular risk score, we selected 50 subjects categorized as CVD high-risk (CVD-HR) and 50 individuals classified as CVD low-risk (CVD-LR). Overall, the CVD-HR subjects had an average age of 55.32 ± 6.7 years compared to 43.06 ± 7.6 years in the CVD-LR group. Furthermore, a significantly higher proportion of smokers was observed in the CVD-HR group compared to the CVD-LR group. The CVD-HR group displayed markedly elevated systolic and diastolic blood pressure, glucose levels, HbA1C, lipid profile parameters, urea, and creatinine levels compared to the CVD-LR subjects. Our findings also revealed specific biases in various clinical characteristics, with systolic and diastolic blood pressure demonstrating a positive correlation with CVD-HR. Moreover, BMI and age were identified as contributors to patient segregation, as illustrated in Figure S1. Therefore, we controlled for age and BMI during the differential expression analysis of plasma and saliva proteomic profiles. The plasma and salivary proteomes show differentially expressed proteins in CVD-HR and CVD-LR subjects Differential expression analysis was performed on plasma samples from 50 CVD-HR subjects in comparison to 50 CVD-LR subjects, encompassing a total of 1,317 proteins detected using SOMAscan. Among these proteins, a subset of 207 plasma proteins exhibited significant differences ( p -value < 0.05) between CVD-HR and CVD-LR (Figure S2). Subsequently, proteins displaying both p -values < 0.05 and at least a 50% fold-change (|FC| > 1.5) between the CVD-HR and CVD-LR groups were selected for visualization in a heatmap, as depicted in Fig. a. In total, 44 plasma proteins (21 increased and 23 decreased) demonstrated significant differential expression with at least a 50% fold-change (|FC| > 1.5) between the CVD-HR and CVD-LR groups (Fig. a). On the other hand, a total of 94 proteins exhibited significant differences ( p -value < 0.05) in saliva samples when comparing the two groups (Figure S2). Among these, 25 salivary proteins demonstrated significant differential expression with at least a 50% fold-change (18 increased and 7 decreased) between the CVD-HR and CVD-LR groups, as illustrated in Fig. b. Identification of common CVD-risk biomarkers in plasma and saliva The differentially expressed proteins in the CVD-HR group were further examined to search for common CVD-risk biomarkers between plasma and saliva. We found eight proteins that showed correlated enrichment in both the plasma and saliva of CVD-HR subjects (Figs. and ). These potential biomarkers include Plexin B2 (PLXNB2), LDL receptor-related protein 1B (LRP1B), GDNF Family Receptor Alpha 1 (GFRA1), acid phosphatase 5, tartrate resistant (ACP5), Chemokine (C–C motif) ligand 15 (CCL15), Complement Component 1, R Subcomponent (C1R), proteasome activator subunit 3 (PSME3) and kallikrein 5 (KLK5). PLXNB2, LRP1B, GFRA1, ACP5, C1R, and CCL15 were upregulated in both saliva and plasma of the CVD-HR group compared to the CVD-LR group (Figs. and ). On the other hand, PSME3 was the only downregulated protein in the CVD-HR group in both plasma and saliva (Figs. and ). Interestingly, KLK5 showed a difference in the direction of change between saliva and plasma, as shown in Figs. and . 
We next examined whether taking anti-diabetic, antihypertensive, or antilipidemic medications would influence the shared CVD-risk biomarkers (Figure S3). All the shared CVD-risk biomarkers showed significant differential expression in saliva and plasma samples of the CVD-HR group after correction for treatment, except plasma KLK5, which was significantly different in saliva but not in plasma samples (Figure S3). The prediction performance of CVD-risk biomarkers using machine learning models To accurately examine the diagnostic ability of the selected biomarkers to distinguish CVD-HR from CVD-LR, and to identify a non-invasive biomarker that can be measured either in saliva or plasma, we ran an unbiased machine learning (ML) analysis. First, we started by identifying the best-performing ML models on our data. Hence, we compared six ML models: Random Forest (RF), Elastic-net (eNET), mixOmics (pls), XGBoost, generalized linear model (GLM), and SVM (RBF), using either the 1,317 proteins or the 8 selected markers as input, with 70% of samples used for training and the other 30% as the testing set (see methods). In plasma, the unbiased model trained on the 1,317 proteins (all features) gave better predictive power compared to the model trained using only the eight shared markers (selected features), as illustrated in Fig. a. Interestingly, however, the restricted model (selected features) was still highly accurate, with an AUC > 0.75, indicating that the previously selected markers still hold very strong predictive power. In saliva, by contrast, the selected-features model demonstrated better predictive power than the model using all the features, as shown in Fig. b. This suggests that the selected markers have better predictive potential in saliva, potentially indicating their suitability as non-invasive biomarkers. Moreover, the RF, eNet, and PLS models had the highest performance in both plasma and saliva, with an average AUC > 0.8 in plasma (Fig. a) and AUC > 0.7 in saliva (Fig. b). Next, we calculated the mean importance of each marker in the unbiased version of these three models (Table ). The variable importance of each model was scaled to be within [0,1] before averaging. Among the eight common CVD-risk biomarkers, the top three predictive biomarkers in plasma were LRP1B (median importance = 0.876309), PLXNB2 (median importance = 0.352254), and CCL15 (median importance = 0.328339), respectively (Fig. c and Table S1). Meanwhile, in saliva, the top three predictive biomarkers were C1R (median importance = 0.387032), LRP1B (median importance = 0.375685), and PLXNB2 (median importance = 0.266522) (Fig. d and Table S1). Across all the differentially expressed proteins in plasma and saliva of the CVD-HR group, plasma LRP1B (median importance = 0.876309) was the strongest CVD-risk predictive biomarker, followed by 14-3-3 protein (YWHAE) (median importance = 0.804179) and saliva Protein S100-A7 (S100A7) (median importance = 0.541921) (Table S1). Pathway enrichment analysis for the differentially expressed proteins To gain insights into the pathways involving the differentially expressed proteins, Gene Ontology (GO) gene sets were analyzed with the enrichGO function from the R/Bioconductor clusterProfiler package. The analysis revealed eight pathways enriched in plasma proteins of the CVD-HR group, as shown in Fig. a. Extracellular matrix organization and extracellular structure organization were the most enriched pathways for the differentially expressed biomarkers in the plasma of the CVD-HR group. 
Similarly, for the differentially expressed proteins in saliva, ten pathways were enriched in the CVD-HR group (Fig. b), including the humoral immune response.
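A minimal sketch of the enrichment step reported here is shown below, assuming 'de_genes' holds the gene symbols of the differentially expressed proteins; the organism annotation package and cutoffs are standard defaults rather than values stated by the authors, and the plot is only a simple ggplot2 view, not a reproduction of the published figure.

library(clusterProfiler)
library(org.Hs.eg.db)
library(ggplot2)

ego <- enrichGO(gene          = de_genes,
                OrgDb         = org.Hs.eg.db,
                keyType       = "SYMBOL",
                ont           = "BP",        # Biological Process ontology
                pAdjustMethod = "BH",
                pvalueCutoff  = 0.05)

# Keep only terms with an adjusted p-value < 0.05, as in the methods
ego_sig <- subset(as.data.frame(ego), p.adjust < 0.05)

# Simple ggplot2 view of the top enriched terms
ggplot(head(ego_sig, 10),
       aes(x = Count, y = reorder(Description, Count), fill = p.adjust)) +
  geom_col() +
  labs(x = "Number of proteins", y = NULL,
       title = "Enriched GO biological processes")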
In summary, the pursuit of a reliable CVD biomarker detectable in bodily fluids presents a substantial promise for cardiovascular risk assessment, diagnostic accuracy, management strategy guidance, and prognosis prediction , . Nonetheless, there remains an urgent requirement for non-invasive biomarkers capable of accurately predicting CVD risk . Current biomarkers such as troponin, creatinine kinase, and myoglobin primarily rely on antibody-based detection methods . Despite their high sensitivity and selectivity, antibody-based diagnostics are often costly and subject to batch-to-batch variability , . Aptamer-based platforms have emerged as a compelling alternative to address the limitations inherent in antibody-based detection . While aptamer-based technology has identified novel CVD biomarkers from plasma or serum samples, data pertaining to saliva-based CVD protein signatures , , , , , particularly employing high-throughput proteomic methods like SOMAscan, are currently scarce , . In the present study, we utilized the aptamer-based SOMAscan platform to analyze plasma and saliva samples obtained from the QGP participants. Through this approach, we assessed 1,317 proteins and identified unique protein signatures associated with increased CVD risk in the Qatari population. Then, using machine learning models we evaluated the predictive power of the identified CVD-risk biomarkers. These findings hold considerable potential for the advancement of promising non-invasive CVD biomarkers and furnish invaluable insights into the proteomic alterations observed in plasma and saliva concerning CVD risk. In plasma, a larger number of proteins (207 proteins) showed association with CVD-HR compared to saliva (94 proteins) (Figure S2). Notably, upon comparing the CVD-HR proteomic signatures between plasma and saliva, distinct proteins linked with CVD risk were identified in each fluid (Figure S2). Predominantly, the differentially expressed CVD markers in both plasma and saliva belonged to the category of inflammatory proteins or were implicated in inflammatory processes (Table S1). Inflammation stands as a recognized risk factor in CVD pathogenesis , and inflammatory proteins, such as cytokines, are present in the saliva . However, it is plausible that the secretory function of the salivary glands may regulate the levels of inflammatory markers within saliva , . Interestingly, saliva protein markers seem to more prominently reflect local inflammation compared to systemic inflammation, as evidenced by plasma markers . This discrepancy may account for the distinct CVD-HR signatures observed in plasma and saliva as previously highlighted . In our study, we identified eight candidate CVD-HR protein biomarkers shared between plasma and saliva, capable of distinguishing between CVD-HR and CVD-LR groups (Figs. and ). These shared biomarkers include LRP1B, C1R, CCL15, KLK5, GFRA1, PLXNB2, ACP5, and PSME3 (Fig. ). Among these candidates, LRP1B or LDL receptor-related protein 1B emerges as the most promising biomarker, exhibiting the best predictive value (median importance = 0.876309) (Table S1). LRP1B belongs to the LDL receptor family and is prominently expressed in the brain, thyroid, and salivary glands . LRP1B exhibits binding affinity to various extracellular proteins implicated in blood coagulation and lipoprotein metabolism, such as fibrinogen and lipoproteins carrying apoE . 
Interestingly, our study underscores a notable elevation in saliva fibrinogen levels and plasma ApoE protein within the CVD-HR group (Table S1). Numerous investigations have elucidated the association of the LRP1B gene with obesity . Interestingly, the LRP1B gene was reported in some genome-wide association studies (GWAS), with findings linking it to systolic blood pressure, particularly in Chinese and sub-Saharan African populations , . LRP1B harbors an intronic single nucleotide polymorphism (SNP) linked to blood pressure regulation, and has a notable interaction effect with smoking . LRP1B is abundantly expressed in the medial layer of coronary arteries, and genetic variations in LRP1B have been linked to the risk of coronary artery aneurysms in Kawasaki disease among Taiwanese cohorts . Additionally, LRP1B protein plays a role in Alzheimer’s disease by modulating the cellular trafficking and localization of the amyloid precursor protein . Furthermore, a significant increase in LRP1B protein level was reported in the serum of women with systemic sclerosis . In the InCHIANTI population study from Italy, LRP1B was inversely associated with cardiovascular health . The current study reports the association of LRP1B protein expression in saliva and plasma with CVD-HR (Fig. ). Moreover, our data highlights LRP1B as a potential biomarker for CVD risk, boasting the highest predictive accuracy (Fig. ). C1R or the Complement Component 1, R subcomponent is a proteolytic subunit in the C1 complex , an integral initiator of the classical pathway of the complement system . The complement system plays a key role in the immune system , . It’s involved in the inflammatory mechanism leading to the development of atherosclerosis . Expression levels of complement proteins, including C1R, have been found to be elevated in atherosclerotic plaques . Furthermore, local complement system activation can lead to neutrophil chemotaxis towards clot formation sites in acute myocardial infarction, with C1 protein detected within plasma clots . Activation of C1 has been associated with remote ischemic conditioning in animal models of ischemic stroke . C1R was also found to increase in circulating exosomes from ischemic stroke patients . Our findings reveal the upregulation of C1R in both plasma and saliva samples from individuals at high risk for CVD (Fig. ). Moreover, employing machine learning models, we observed that C1R exhibited superior predictive capabilities in saliva compared to plasma (Fig. ), thereby positioning it as a potential non-invasive biomarker for CVD risk assessment. Another promising biomarker to predict high CVD-risk identified by the current study is kallikrein 5 or kallikrein-related peptidase 5 (KLK5). KLK5 is a member of the Kallikrein-related peptidases (KLKs) family which comprises highly conserved serine proteases . KLKs, along with the complement system and the renin-angiotensin system (RAS) pathway, play crucial roles in cardiovascular disease by initiating vascular inflammation, leading to hypertension and subsequent clot formation . KLK5 is mainly expressed in the skin, brain, breast, and testis and plays a key role in skin homeostasis . Additionally, KLK5 is also involved in the thrombolytic system as it binds and modifies plasminogen, kininogen, and fibrinogen . On the other hand, KLK5 can be inhibited by antiplasmin and antithrombin . 
Interestingly, our study observed a significant decrease in KLK5 levels in the plasma of individuals at high CVD risk, whereas a marked increase was noted in saliva samples from subjects with high CVD risk compared to those at low risk (Fig. and Table S1). It is important to note that KLK proteins in plasma differ from tissue KLKs, exhibiting distinct enzymes and releasing different kinins . This difference in enzymatic components, activation, and effect might explain the interesting variation in KLK5 level between saliva and plasma. Additionally, some of the proteins implicated in blood coagulation, such as plasma thrombin and the Integrin alpha-IIb: beta-3 complex (platelet receptor), demonstrated decreased levels in plasma, while fibrinogen exhibited a significant increase in saliva samples from the CVD-HR group (Table S1). This distinctive pattern mirrors the observed variation in KLK5 levels between plasma and saliva. Moreover, KLK5 is recognized as a promising early biomarker in cancer . In a recent study, KLK5 was found to be associated with T2D in an African American population . Here, we suggest the potential use of salivary KLK5 as a noninvasive and early biomarker for predicting high CVD risk. Another candidate high CVD-risk marker is Chemokine (C–C motif) ligand 15 (CCL15), also known as macrophage inflammatory protein-5 (MIP-5) or leukotactin-1 (Lkn-1) . This pro-inflammatory chemokine plays a pivotal role in activating and recruiting leukocytes into the blood vessel wall . In a study involving a South African population, CCL15 emerged as a valuable indicator of vascular health, demonstrating a positive association with carotid intima media thickness (cIMT), an early marker of atherosclerotic changes . Plasma CCL15, along with 70 proteins, was part of a protein risk score that was associated with atherosclerotic cardiovascular disease incidence . CCL15 was found to be increased in patients with myocardial infarction with non-obstructive coronary arteries compared with patients with acute myocardial infarction and obstructive coronary arteries . In addition, recent evidence from a large-scale study conducted in cohorts from Norway and the USA has highlighted the link between plasma CCL15 and incident heart failure . Our data confirm the association of increased levels of CCL15 with high CVD risk in plasma and report similar findings in the saliva of the Qatari subjects (Fig. ). Additional CVD-risk biomarkers identified in our study include GFRA1, PLXNB2, ACP5, and PSME3. A recent investigation in an African American population reported associations of plasma levels of GFRA1 and Plexin B2 with type 2 diabetes . GFRA1, formally known as Glial cell line-derived neurotrophic factor Family Receptor Alpha 1, has been implicated in numerous studies investigating modifiable lifestyle risk factors. For example, within the Framingham Heart Study, GFRA1 exhibited a significant association with alcohol consumption. Furthermore, a study involving Saudi women with gestational diabetes mellitus (GDM) linked the GFRA1 gene with this condition . PLXNB2, also known as Plexin B2, is expressed in human monocytes, macrophages, and foam cells, and has been observed to play a role in monocyte binding to endothelial cells in vitro. Additionally, Plexin B2 has been associated with heightened diabetes risk within the Cardiovascular Health Study population , . 
ACP5, or tartrate-resistant acid phosphatase type 5, is an enzyme involved in bone metabolism and the immune response against bacteria , . ACP5 is primarily expressed by osteoclasts, dendritic cells, and activated macrophages. Serum ACP5 has been suggested as a potential biomarker for detecting bone metastasis in prostate cancer patients . In a previous study that addressed the effect of magnesium on cardiovascular disease blood biomarkers, ACP5 levels were found to be affected by magnesium supplementation . Moreover, serum ACP5 levels have been observed to rise in chronic kidney disease patients with vascular calcification who are undergoing hemodialysis . Our data reveal a significant elevation in ACP5 levels in the plasma and saliva of individuals at high risk for cardiovascular disease compared to those at low risk. PSME3, or Proteasome activator complex subunit 3, serves as a pivotal regulator in protein degradation by acting as a regulatory protein for the 20 S proteasome , . Within the cell, PSME3 predominantly exists as a homodimer in the nucleus. In macrophages, it has been noted for its significant contribution to protection against bacterial infections , . Previous research has indicated an elevated level of PSME3 in pancreatic cancer. However, our findings demonstrate a reduction in PSME3 expression in both plasma and saliva samples from individuals classified in the high-risk group for cardiovascular disease (CVD-HR) (see Figs. and ). Notably, PSME3 has been associated with obesity and insulin resistance . Additionally, it is implicated in cell proliferation and fosters glycolysis in pancreatic cancer . After correction for treatment (Figure S3), the levels of most of the shared CVD-risk biomarkers remained consistent with our previous results (Fig. c-d), except for plasma KLK5, suggesting that treatment did not influence the validity of these markers, especially in the saliva of the CVD-HR group. Moreover, we conducted a pathway enrichment analysis for the differentially expressed proteins in the CVD-HR group to explore the pathways associated with the identified CVD-risk biomarkers (Fig. ). Our analysis revealed extracellular matrix organization and disassembly as the two shared enriched pathways among the protein biomarkers associated with CVD risk, identified in both plasma and saliva samples (depicted in Fig. ). This aligns with previous research indicating extracellular matrix organization as a primary mechanism associated with proteins linked to early death in heart failure . In saliva, KLK5 was significantly increased in the CVD-HR group (Fig. ). KLK5, with its trypsin-like activity, can digest components of the extracellular matrix such as collagens (I, II, III, and IV), fibronectin, and laminin . Our data also report a significant increase in plasma collagen alpha-1(VIII) chain (CO8A1), along with differential expression of matrix metalloproteinases (such as MMP9, MMP3, and MMP12), in the CVD-HR group (Table S1). This might explain the enrichment of extracellular matrix organization and disassembly in the CVD-HR group (Fig. ). The pathway enrichment analysis also gave insight into the distinct CVD protein signature in saliva, as it was uniquely enriched for pathways related to antibacterial functions (Fig. b). This can be explained by the significantly elevated levels of KLK5 and KLK7 proteins found only in the saliva of the high CVD-risk group (Table S1). Salivary enzymes play key roles, including antimicrobial functions . 
KLK5 and KLK7, through their proteolytic activity, mediate the antimicrobial activity of antimicrobial peptides such as cathelicidin . In addition, other proteins involved in defense against bacteria, such as MRC1 and S100A7, were also significantly increased only in the saliva of the CVD-HR group (Table S1). Interestingly, all the shared CVD-risk biomarkers are linked to oxidized LDL, which builds up very early in atherosclerosis development, suggesting the relevance of the identified CVD-risk biomarkers in reflecting CVD development at an early stage , – . Looking ahead, longitudinal studies will be needed to follow up on these subjects and observe their progression to CVD. Our study has some limitations. First, the sample size used on the SOMAscan platform for CVD-risk biomarker discovery is relatively small. Since SOMAscan provides relative rather than absolute quantification , a complementary quantitative proteomic method, such as an immunoassay, is needed to validate the identified biomarkers. Second, applying SOMAscan technology to complex samples like plasma and saliva can result in nonspecific protein detection, as a single aptamer may bind to multiple targets. Third, the study analyzed samples from only Qatari subjects, limiting its generalizability to other populations. Further validation of the CVD-risk biomarkers in a larger multi-ethnic cohort is needed. In conclusion, this study marks the first attempt to identify a protein signature associated with CVD risk in saliva samples using a large-scale proteomic approach (SOMAscan) in the Qatari population. Our results unveil eight potential CVD-risk protein biomarkers with promising diagnostic accuracy, providing a valuable tool for identifying individuals at risk of CVD development. Consequently, both plasma and saliva proteomics represent promising avenues for predicting CVD risk.
Supplementary Information 1. Supplementary Information 2.
|
Expression of integrin α | ae5b1761-9b09-459c-bd0f-2d37e0016a8b | 11497997 | Anatomy[mh] |
Background
Medullary thyroid carcinoma (MTC) is a neuroendocrine tumor derived from the calcitonin-producing parafollicular C-cells of the thyroid. Although MTC accounts for only 1–2% of thyroid carcinomas, it is responsible for 13% of thyroid cancer-related deaths . In 75% of cases, MTC occurs sporadically, while it can also occur as part of the hereditary tumor syndrome Multiple Endocrine Neoplasia type 2 (MEN2) . Treatment with curative intent consists of total thyroidectomy and dissection of the central lymph node compartment. However, despite treatment, over half of patients continue to exhibit elevated calcitonin levels, indicating persistent disease, and imaging modalities are not sufficient to localize disease in these patients with low tumor marker levels. Moreover, possibilities for adjuvant therapy are limited. Consequently, survival rates have not increased significantly in the last decades . Therefore, there is a demand for new imaging and therapeutic options that also target lymph node metastases, which would enable better treatment of patients who present with metastases or progress rapidly.
Neuroendocrine tumors are highly vascularized, and angiogenesis plays a major role in the development of thyroid tumors. Most current adjuvant treatments, such as tyrosine kinase inhibitors, target angiogenesis pathways. Integrin αvβ3 is a target for nuclear imaging and treatment (theranostics) that is also strongly involved in the regulation of angiogenesis . It is largely expressed in the neovasculature and tumor cells of various malignancies, including melanoma, glioma, breast, pancreas, prostate, lung, head and neck, and gastric cancer . Furthermore, αvβ3 integrin affects tumor growth, local invasion, and the development of metastases . Arginine-glycine-aspartate (RGD) peptides have high affinity and specificity for the extracellular domain of αvβ3 integrin . Therefore, radiolabeled RGD can be used for imaging of malignancies as well as for subsequent treatment with peptide receptor radionuclide therapy (PRRT). The aim of this study was to determine αvβ3 integrin expression in MTC and its lymph node metastases to assess its suitability as a nuclear target. The correlation of αvβ3 with clinicopathologic variables and survival was also assessed.
Materials & methods
The same cohort, database and TMA were used as described in our previous research .
2.1. Patients
Patients who underwent surgery between 1988 and 2014 for MTC were identified from the pathology databases of five Dutch tertiary referral centers: Leiden University Medical Center (LUMC), Amsterdam University Medical Center (AUMC), Radboud University Medical Center (RUMC), University Medical Center Groningen (UMCG) and University Medical Center Utrecht (UMCU). Formalin-fixed paraffin-embedded (FFPE) tissues were retrieved from pathology archives. Primary tumor tissue from 104 patients was available for inclusion in the tissue microarray (TMA). Additionally, tissue of lymph node metastases from 27 patients from the LUMC and UMCU was available. Clinical and pathological data were obtained from patient records. Germline mutation analysis of the RET gene was performed to confirm all MEN2 diagnoses. Sporadic patients either had a negative germline mutation analysis or a negative family history.
Microscopically detected positive resection margins were not included as a separate variable but were incorporated into the T-stage classification. Disease status was based on postoperative calcitonin and CEA serum values. Given the range of assays used across the five centers over nearly three decades, no exact values or doubling times were used. A CEA or calcitonin level above the reference range applicable at that time was considered indicative of persistent disease, whereas values within the normal range were interpreted as cured. Only postoperative CEA and calcitonin values measured more than 6 months after surgery were considered. Necrosis, angioinvasion and desmoplasia were scored on whole slides, on the same FFPE blocks that were used for the construction of the TMA. Necrosis and angioinvasion were scored as absent or present. Desmoplasia was scored as negative, some, moderate or severe. This study was performed according to national guidelines with respect to the use of leftover tissue, and approval for this study, including the use of patient data, was obtained from the Institutional Review Board of the UMCU.
2.2. Construction of the tissue microarray
An automated machine (TMA Grand Master, 3D Histech, Budapest, Hungary) was used to create the TMA. Three cores of 0.6 mm were punched from each FFPE block of primary tumor and available lymph node metastases. To ensure that cores were punched from tumor regions, a pathologist (PJvD) identified and marked cell-rich areas on H&E slides. These slides were then scanned, and the marked areas were manually circled using TMA software (3D Histech).
2.3. Immunohistochemistry
TMA blocks and whole slides were cut at 4 μm and mounted on coated slides. Staining for αvβ3 was carried out manually according to the following protocol: after baking the slides at 60°C for 10 min, slides were deparaffinized in xylene for 10 min, hydrated in a series of 100% ethanol and 70% ethanol, and rinsed with demi-water. Hereafter, slides were washed with PBS twice. Endogenous peroxidase was blocked using 3% H2O2 in PBS for 15 min. Antigen retrieval was performed in Tris-EDTA buffer (pH 9) by boiling. Slides were washed twice with PBS-Tween and then incubated with Pierce protein-free T20 (PBS) blocking buffer (PIER37573, Thermo Scientific) at room temperature in a dark place for 15 min. The primary αvβ3 antibody (1:25, ab7166 mouse monoclonal [BV3], Abcam) was incubated overnight in a dark place at 4°C. Slides were washed three times with PBS-Tween. Then, a two-step detection system was used (VWRKC-DPVB110HRP, Immunologic). First, a post-blocking step was performed for 15 min, and slides were washed three times with PBS-Tween. Second, poly-HRP-anti-mouse/rabbit HRP was added for 30 min; both incubations took place in the dark at room temperature. Slides were washed three times with PBS-Tween. Bright DAB (VWRKBS04-110, Immunologic) was added, and the slides were incubated for 8 min in the dark at room temperature. Slides were washed with tap water, counterstained with 3x diluted Mayer's hemalum solution (1.09249.0500, Sigma-Aldrich), washed with tap water and coverslipped. Tissue of renal cell carcinoma and hemangioma was used as a positive control. As a negative control, the staining was performed on tissue of renal cell carcinoma and MTC without addition of the primary antibody.
2.4. Scoring of the immunohistochemistry
The cores included in the TMA and the whole slides were scored for cytoplasmic and membranous staining by an experienced pathologist (PJvD) and a researcher (LHdV), both blinded to the clinicopathologic characteristics. Any disagreements were resolved through discussion, when necessary with the help of a third reviewer (LL). The intensity of cytoplasmic staining was scored as absent (0), weak (1), moderate (2) or strong (3). Membranous staining was scored as present or absent. Staining was considered homogeneous if the intensity across the various cores was consistent. shows representative scores of all immunostainings. Data on hypoxia inducible factor-1 alpha (HIF-1α), VEGF, glucose transporter 1 (Glut-1), carbonic anhydrase IX (CAIX), microvessel density (MVD) and somatostatin receptor 2A (SSTR2A) were available from previous studies .
2.5. Statistical analysis
Categorical data were summarized using frequencies and percentages, while continuous data were summarized using medians and ranges. To enhance the statistical power, categorical data were recoded into dichotomous variables. Grade of desmoplasia was recoded into none-some vs. moderate-severe. Stage was recoded into stage I–III and stage IV. Heredity was recoded as either sporadic disease or MEN2 syndrome. The αvβ3 scores were transformed into a dichotomous variable, considered positive if the average intensity of cytoplasmic staining across the scored cores was >1 or if membranous staining was present in ≥1 of the scored cores. Overall survival (OS) was defined as the time to death from any cause. Progression-free survival (PFS) was defined as the time to the development of distant metastases or death. Univariate Cox regression survival analysis was performed. Furthermore, Kaplan-Meier survival curves were plotted, and significance was calculated using the log-rank test. All reported p-values were two-sided. Analysis was performed using SPSS software, version 25.0 (IBM, Armonk, NY, USA).
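For readers who wish to reproduce this type of analysis outside SPSS, the sketch below shows a univariate Cox model and a Kaplan-Meier/log-rank comparison for a dichotomized marker using the open-source lifelines package. The follow-up times, events, and marker labels are simulated placeholders, not the study data.

```python
# Hedged sketch: univariate Cox regression plus Kaplan-Meier / log-rank test for a
# dichotomized marker (e.g., cytoplasmic avb3 positive vs. negative). Simulated data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "marker_positive": rng.integers(0, 2, 104),   # hypothetical dichotomized score
    "months": rng.exponential(80, 104).round(1),  # hypothetical follow-up duration
    "event": rng.integers(0, 2, 104),             # 1 = death or progression
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")   # single covariate = univariate
print(cph.summary[["exp(coef)", "p"]])                   # hazard ratio and p-value

pos, neg = df[df.marker_positive == 1], df[df.marker_positive == 0]
kmf = KaplanMeierFitter()
kmf.fit(pos.months, pos.event, label="marker positive")  # KM curve, positive group
kmf.fit(neg.months, neg.event, label="marker negative")  # KM curve, negative group
lr = logrank_test(pos.months, neg.months,
                  event_observed_A=pos.event, event_observed_B=neg.event)
print(f"log-rank p = {lr.p_value:.3f}")
```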
Results
3.1. Clinicopathological variables
Baseline characteristics are shown in . One hundred and four patients were included. Patients were aged 10 to 82 years (mean 45.8, SD 16.3). Half of the patients were male. The majority of patients had sporadic disease (56.8%), 38.9% had MEN2A and 4.2% had MEN2B. Patients presented with stage I, II, III and IV disease in 13.5%, 24.0%, 16.7% and 45.8% of cases, respectively. Tumor size ranged from 4 to 70 mm (mean 25.6 mm, SD 14.8). At the time of initial surgery, 63.4% of patients had developed lymph node metastases.
3.2. αvβ3 expression in the primary tumor
The mean intensity of αvβ3 in all cores containing primary tumor was 1.6 (SD 0.58). Only two patients showed no cytoplasmic αvβ3 expression in one or more cores. The intensity of the scored cores was 0, 1, 2 and 3 in 0.8%, 42.8%, 52.4% and 4.0%, respectively. Among the 91 patients with multiple cores available for analysis, 71.4% exhibited homogeneous expression throughout the primary tumor. Membranous staining was seen in 28.8% of patients. In 75.8% of patients with multiple cores available for analysis, membranous staining was consistently present or absent in all cores.
3.3. αvβ3 expression in the primary tumor vs. lymph node metastases
The average expression in the primary tumor and lymph node metastases of these individual patients is demonstrated in . Tissue of lymph node metastases of 27 patients was available in the TMA. Twenty-three patients had cytoplasmic αvβ3-positive primary tumors. These 23 patients had 29 lymph nodes available for analysis, of which six had negative and 23 had αvβ3-positive cytoplasm. Two of the four patients with αvβ3-negative cytoplasm in the primary tumor had positive cytoplasm in the lymph node metastases. Eleven of the 27 patients had αvβ3-positive membranes in the primary tumor, of which two patients also showed membranous expression in the lymph node metastases. Four patients had negative membranes in the primary tumor but positive membranes in the lymph node metastases.
3.4. Association between αvβ3 expression in the primary tumor & clinicopathological variables
shows αvβ3 expression in comparison with clinicopathological variables.
Patients with lymph node metastases at the time of initial surgery significantly (p = 0.02) more often had αvβ3-positive cytoplasm than patients without lymph node metastases (71.0 vs. 29.0%, respectively). αvβ3-positive membranes were seen significantly (p = 0.01) more often in patients with sporadic MTC than in patients with MEN2 (77.8 vs. 22.2%, respectively). For membranous positivity, no other significant variables were found.
3.5. Prognostic value
Univariate survival analysis for cytoplasmic and membranous αvβ3 expression was not significant for PFS or OS, as outlined in . Kaplan-Meier survival curves are shown in Supplementary Figure S1. For cytoplasmic αvβ3-positive vs. -negative MTC, 10-year survival rates were 84 and 81% for PFS, and 70 and 64% for OS, respectively. For membranous αvβ3 positivity and negativity, PFS was 70 and 52%, and OS was 84 and 75% after 10 years, respectively.
Discussion
This study shows that the theranostic target αvβ3 was expressed in the cytoplasm of the majority and on the membrane of a minority of MTCs. In most cases, αvβ3-positive tumors exhibited homogeneous expression throughout the primary tumor. Survival analysis showed no prognostic value of αvβ3. While Cheng et al. examined αvβ3 expression in three PTC cell lines using immunofluorescence and showed moderate to high expression on the cell surface (p = 0.05), immunohistochemical staining of αvβ3 has not been evaluated in thyroid tumors in other series . In pancreatic carcinoma, predominantly cytoplasmic staining is observed . Gastric cancer shows mainly membranous staining; in cases of strong membranous staining, some cytoplasmic staining is also seen . Brain metastases of lung carcinoma exhibit prominent membranous staining . Prostate cancer displays some cytoplasmic staining but lacks membranous staining . Our immunohistochemistry results show that αvβ3 is largely expressed in the cytoplasm of MTC rather than in the membrane. Only three cores in the TMA did not express any cytoplasmic αvβ3, while 67.3% of patients were deemed αvβ3 positive using our cut-off value. Membranous staining was seen in 28.8% of patients.
αvβ3 expression and imaging with radiolabeled RGD have not yet been investigated in MTC, nor has treatment with 177Lu-labeled RGD. However, imaging and treatment with radiolabeled RGD have been investigated in differentiated thyroid carcinoma (DTC). Zhao et al. described uptake in radioactive iodine (RAI)-refractory metastatic lesions in ten DTC patients on 99mTc-3PRGD2 SPECT imaging . Vatsa et al. presented a case of RAI- and 18F-FDG-non-avid papillary thyroid carcinoma (PTC) in which 68Ga-DOTA-RGD2 was able to depict cervical lymph node metastases . Parihar et al. compared 68Ga-DOTA-RGD2 to 18F-FDG PET/CT in 44 patients with RAI-refractory DTC and found a similar sensitivity but a significantly higher specificity of 68Ga-DOTA-RGD2, especially for lymph node metastases . Furthermore, they reported results suggesting a response to 177Lu-DOTA-RGD2 treatment, with a follow-up time of four months, in a single DTC patient with uptake in the thyroid remnant, cervical and mediastinal lymph nodes, bone lesions and lung nodules on 68Ga-DOTA-RGD2 PET/CT . In our analysis, a distinction was made between patients with cytoplasmic and membranous expression. RGD binds to the extracellular domain of the αvβ3 integrin . Therefore, membranous expression is the most interesting for theranostic purposes and should be the focus of further research.
Patients with sporadic MTC significantly more often had αvβ3-positive membranes. Hence, this subgroup of patients, though small, may benefit more from imaging with radiolabeled RGD and may be more eligible for PRRT, especially when curative surgery is no longer possible. It is plausible that patients with more abundant membranous αvβ3 expression show more uptake on RGD imaging. However, this has not been studied in thyroid cancer or other tumors. Further research on the relation between immunohistochemical αvβ3 expression and uptake of radiolabeled RGD is therefore needed.
αvβ3 integrin has a strong effect on angiogenesis and is associated with tumor growth, tumor invasion and the development of metastases in various malignancies, all of which are prognostically relevant . Our results show a correlation between cytoplasmic expression and the presence of lymph node metastases at the time of primary surgery, which is in line with results in pancreatic cancer . Furthermore, the expression of αvβ3 was correlated with bone metastases in prostate and breast carcinoma . Further research is needed to investigate whether αvβ3 is also correlated with distant metastases in MTC. A correlation with tumor size was not seen in our study, contrary to the results of studies describing tumor growth and proliferation in ovarian cancer . In cervical cancer, αvβ3 is significantly correlated with decreased survival . This is in contrast with the findings of Böger et al., who showed significantly increased survival for patients with αvβ3-positive gastric cancer . In our study, survival analysis showed no significant results.
A strength of this study is the relatively large sample size of 104 patients, considering the rarity of MTC. Another strength is the long follow-up time (mean 68.9 months, range 0–318 months), which is essential since MTC has low proliferative activity and low event rates. Furthermore, for the first time, immunohistochemical αvβ3 data were combined with clinical end points such as the development of distant metastases and death. Most limitations of this study are a result of the retrospective design and the low incidence of MTC. To assess a substantial amount of data, patients were included from five tertiary referral centers over a period of almost thirty years. As a consequence, only variables that were consistent over time and between centers could be used in our analysis, and our follow-up ranges widely. Over the years, surgical guidelines have changed, and surgical techniques may have differed between centers. A subanalysis of progressive patients would have been of added value but was not possible due to the sample size. For future research involving a larger cohort, it would be interesting to use a more extensive IHC scoring system such as the immunoreactive score (IRS).
Conclusion
To conclude, αvβ3 seems to be frequently expressed in the cytoplasm and less often on the membranes of MTC cells. For future research, implementing a more extensive IHC scoring system such as the IRS would be advisable. Also, the correlation between immunohistochemical αvβ3 expression and uptake of radiolabeled RGD should be further assessed in patients with membranous αvβ3 expression.
Supplementary Figure S1 |
Thalamic Local Field Potentials and Closed‐Loop Deep Brain Stimulation in Orthostatic Tremor | dc554631-954e-4350-9882-95d460b2e314 | 11752987 | Surgical Procedures, Operative[mh] | Case A 70‐year‐old man with a 10‐year progressive history of OT first presented to the Movement Disorders Centre at Toronto Western Hospital in 2021. There were no accompanying features, relevant family history, or alcohol responsiveness (Video ). A baseline magnetic resonance imaging brain scan was unremarkable except for nonspecific T2 hyperintensities in keeping with mild small vessel disease. A diagnosis of OT was made, and a number of different medications (clonazepam, propranolol, primidone, gabapentin, and levodopa) were tried; however, they all resulted in side effects and were discontinued. Surgical procedures, electrode localization, and volume of tissue activated (VTA) modeling are shown in Figure . Experimental Phase 1: LFP and Tremor Recording A postsurgical tremor evaluation at 12 months was performed, which consisted of three conditions: (1) standing (both feet and single‐leg stance), (2) walking, and (3) lying supine on a tilt table for 60 seconds followed by tilting to 45° for 60 seconds and upright for another 60 seconds (Fig. ). All three conditions were conducted with the stimulation OFF followed by stimulation ON. Continuous DBS (cDBS) settings were right VIM: case+ contact 2‐/2.3 mA/60 μsec/180 Hz; left VIM: case+ contact 9‐/2.3 mA/60 μsec/180 Hz. Amplitude was subsequently reduced to 2.0 mA bilaterally due to gait ataxia. A sensing survey of available contacts was performed during the standing condition to capture baseline LFPs in both hemispheres (Fig. ). The LFP of interest ±2.5 Hz was livestreamed for up to 30 minutes and collected as a JSON file (sampled at 250 Hz) for offline magnitude‐squared coherence analysis. Tremor was simultaneously recorded using two Xsens DOT sensors (Sensor Fusion Technologies, Enschede, the Netherlands) attached to the shins. Data were synchronized by placing a transcutaneous electrode on the left side of the neck—delivering a small artifact through electrical stimulation to “mark” the start of a condition during livestreaming—video recordings, the internal clock of the DBS device, and time duration of the accelerometers. Data analysis is presented in the . Experimental Phase 2: a DBS After a transient initial improvement, cDBS could only improve tremor using amplitudes that worsened gait. Gait assessment was completed on the Zeno Walkway Gait Analysis System (ProtoKinetics, Havertown, PA, USA), and PMKAS software was used to extract data outcomes for each recording . Gait analysis demonstrated a reduction in velocity and stride length accompanied by widening of the base of support when the stimulation was ON (Table ), thus representing DBS‐induced gait ataxia, also confirmed by the subject reporting greater instability despite tremor reduction. In June 2022, after the approval of Health Canada was obtained, the features of aDBS embedded in the DBS device were unlocked. In keeping with the methods published by Little et al, single‐threshold aDBS was programmed thereafter. The LFP found to be coherent with tremor during offline analysis was used to trigger VIM DBS independently based on hemispheric‐specific power cutoffs (Fig. ). The captured LFP was able to discriminate standing versus supine position as also confirmed by the chronic recording at home (Fig. ). 
Sensing was set up at 14.65 Hz, and right and left single thresholds were lowered to −570 and −360, respectively. Clinical evaluation and gait analyses were carried out comparing DBS OFF, aDBS, and cDBS in a double‐blinded fashion. A wash‐out period of 30 minutes was carried out between each condition. A stopwatch was used to measure the maximum standing duration before the patient felt the need to sit down. Pre‐ and post‐aDBS Short Form 36 Health Survey Questionnaires (SF‐36) were completed at different time points. The study was approved by the institutional Research Ethics Board (REB) (UHN: 15‐8777).
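To make the offline analysis concrete, the following is a minimal sketch of a magnitude-squared coherence computation between a 250 Hz LFP stream and a shin accelerometer trace. The signals here are synthetic stand-ins with a shared ~14.65 Hz component; the exact windowing and preprocessing used in the study may have differed.

```python
# Hedged sketch: magnitude-squared coherence between an LFP channel (250 Hz) and an
# accelerometer signal during standing. Synthetic data, illustrative parameters only.
import numpy as np
from scipy.signal import coherence

fs = 250.0                                   # device LFP sampling rate
t = np.arange(0, 60, 1 / fs)                 # 60 s of simulated standing
rng = np.random.default_rng(2)

lfp = np.sin(2 * np.pi * 14.65 * t) + 0.8 * rng.standard_normal(t.size)
accel = np.sin(2 * np.pi * 14.65 * t + 0.6) + 0.8 * rng.standard_normal(t.size)

f, cxy = coherence(lfp, accel, fs=fs, nperseg=1024)
peak = np.argmax(cxy)
print(f"peak coherence {cxy[peak]:.2f} at {f[peak]:.2f} Hz")   # expected near 14.65 Hz
```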
LFP and Tremor Recording A peak LFP signal in the standing position was detected at 14.65 Hz bilaterally (Fig. ; Fig. ). Tremor frequency in the legs when standing (bipedal and one‐leg standing) was also detected to be at 14.65 Hz and was highly coherent bilaterally (Fig. ). Ipsilateral and contralateral LFPs were also highly coherent when only one leg was bearing weight (Fig. ). Tremor power was suppressed bilaterally using DBS (Fig. ), whereas LFP power was only slightly reduced (Fig. ). The coherence between accelerometers, and between accelerometers and contralateral LFPs, was reduced (left brain more than right) when stimulation was turned ON; the coherence between LFPs was not reduced (Fig. ). During one‐leg standing, similar effects on tremor and LFPs were recorded although they were generally less pronounced (Fig. ). Coherence between accelerometers and contralateral LFPs was suppressed using DBS on both sides (Fig. ). Finally the coherence between LFPs was only slightly reduced using DBS, more so when standing on the right leg (Fig. ). Neither LFPs nor tremor was detected when the patient was in the semi‐recumbent position (45°; Fig. ). No stepwise change in LFP signal was observed during a change in posture from semi‐recumbent to the upright position on the tilt table (data not shown). During walking, tremor was largely suppressed, whereas LFPs were bilaterally attenuated and appearing in an antiphase between hemispheres. Therefore, coherence between LFP signals and tremor was lost during walking (Fig. ). Adaptive DBS Regarding the side effect of cDBS‐induced ataxia, aDBS was initiated as described in the Supplementary Material (Fig. ). At the end of the programming optimization (double‐blinded evaluation 1 month later), gait analysis demonstrated an improvement in velocity and step length during aDBS compared to cDBS, alongside a 12.8% reduction in stride width. However, these changes were largely absent during the double‐blinded evaluation 9 months later (Table ). Subjective and other objective measures (timed tasks performed in a double‐blind fashion) generally confirmed the superiority of aDBS over cDBS in this patient up to the last follow‐up appointment, 18 months after aDBS initiation (Fig. ; Video ). Similarly, SF‐36 scores demonstrated a sustained improvement in several categories with aDBS use, including physical functioning, energy/fatigue, emotional well‐being, and general health (Fig. ).
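As an illustration only, the single-threshold paradigm evaluated above can be prototyped in a few lines: stimulation for each hemisphere is commanded on whenever the sensed 14.65 Hz band power exceeds a hemisphere-specific cut-off and off otherwise. The threshold and amplitude values below are arbitrary placeholders, and commercial devices additionally ramp amplitude rather than switching instantaneously.

```python
# Hedged sketch of single-threshold adaptive stimulation logic (not device firmware).
def commanded_amplitude(band_power, threshold, on_amplitude_ma=2.0):
    """Return the stimulation amplitude (mA) for one hemisphere given the sensed
    band power: full amplitude above the threshold, zero below it."""
    return on_amplitude_ma if band_power > threshold else 0.0

thresholds = {"right_VIM": 500.0, "left_VIM": 350.0}      # illustrative units only
for power in (200.0, 420.0, 900.0):                        # simulated band-power readings
    print(power, {side: commanded_amplitude(power, th) for side, th in thresholds.items()})
```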
This is the first report of thalamic LFP activity captured in a chronically implanted OT patient during standing and walking conditions. Furthermore, we describe the first clinical use of aDBS in OT. A tremor frequency of ~14.65 Hz was found peripherally when standing, which was highly coherent bilaterally. Additionally, LFPs of a frequency similar to the contralateral tremor were captured and demonstrated significant coherent activity, a finding not previously demonstrated in OT. We found pathological coupling between brain (LFP) and leg tremor (accelerometer) only during the standing position, and its absence as the body was being experimentally tilted, which suggests that OT is an all‐or‐none phenomenon related to weight‐bearing. LFP power and tremor (as well as their coupling) were suppressed during walking, which is consistent with the well‐known subjective improvement in symptoms experienced by patients. Visual inspection of the spectrogram, however, shows bouts of LFPs that most likely correspond to the contralateral stance phase. Although the gait cycle was not synchronized to LFP streaming, such a hypothesis fits with a previous electromyography study. Interestingly, highly coherent LFPs were recorded in both hemispheres when one leg was weight‐bearing (and during locomotion), which might support the widely accepted notion of a single central oscillator, possibly involving the ponto‐cerebello‐thalamo‐cortical pathways, and the bilateral spreading of the tremor in OT. , , , Early studies describe the potential role of the cerebellar thalamus in tremor ; the anatomical location of a central oscillator remains a matter of debate, although most authors hypothesize that it lies outside the thalamus. , , ,
OT is a progressive condition, which we were initially able to improve by increasing the stimulation amplitude of cDBS. Stimulation only marginally suppressed LFP power but was nevertheless able to decouple tremor and LFPs, in keeping with the generally modest results of VIM DBS in OT when compared to ET. This might also suggest that the cerebello‐thalamic tract only amplifies tremor amplitude and perhaps the conscious perception of unsteadiness. Numerous programming strategies were tried because of gait ataxia and the detrimental effect of cDBS on standing tasks (Fig. ). aDBS was found to be effective in improving mobility, tremor, and quality of life up to the latest follow‐up (18 months). Single thresholds required optimization initially but remained unchanged after an increase in amplitude and more ventral stimulation. aDBS was also well tolerated, with no reported side effects during stimulation. Interestingly, the time taken to trigger stimulation had increased by the latest follow‐up, possibly pointing to long‐lasting neuroplasticity effects favored by aDBS. Progressive gait worsening was nevertheless observed over time (Table ), which is considered part of OT. ,
Our unique study protocol is not without limitations. First, the greater LFP power in the right VIM might have been related to the slight differences in electrode position (Fig. ), although this is unlikely, as tremor amplitude was actually greater on the left (Fig. ) and the right VTA was eventually moved ventrally to the inferior border of the thalamus to engage the cerebellar‐thalamic bundle. Second, the study protocol did not include unilateral stimulation, in order to reduce the number of conditions and analyses. Third, magnitude‐squared coherence measurements could potentially be affected by movement artifacts.
Tremor mechanically transmitted from the weight‐bearing to the contralateral non‐weight‐bearing leg cannot be excluded (unlikely due to the presence of LFP). Finally, these results originate from a single patient although every opportunity was used to strengthen our findings, including randomizing the order of the experimental conditions, keeping the patient and raters blinded, and checking fatigue level before proceeding. In conclusion, this experience provides new insights into both the pathophysiology and management of OT. The commercialization of DBS devices with sensing ability and the imminent adoption of large‐scale aDBS promise to further advance our understanding and treatment of this disabling, typically medication‐refractory condition.
(1) Research project: A. Conception, B. Organization, C. Execution; (2) Statistical analysis: A. Design, B. Execution, C. Review and critique; (3) Manuscript: A. Writing of the first draft, B. Review and critique. W.K.W.F.: 1B, 1C, 2A, 2C, 3A. S.S.: 1C, 2A, 2B, 3B. G.S.: 1C, 2A, 2B, 3B. B.S.: 2B, 2C, 3B. L.M.: 2A, 2B, 3B. A.E.L.: 2C, 3B. A.M.L.: 2C, 3B. S.K.K.: 2C, 3B. A.F.: 1A, 1B, 1C, 2A, 2C, 3B.
Figure S1. Electrode localization and visualization of stimulation. Figure S2. The different experimental epochs: subject standing on both feet (A), subject adopting one‐leg standing (B), subject lying on tilt table (C) and tilted at 45 degrees (D). Figure S3. Survey identifying LFPs of 14.65Hz only during standing. Table S1. Gait analysis comparing aDBS, cDBS, and OFF DBS conditions. Figure S4. Screenshot of streaming data captured on the clinician programmer from the right VIM. Figure S5. Screenshot of ‘Brainsense™ Timeline’ function captured on the clinician programmer from the Left VIM.
|
The effect of maxillary premolar distalization with different designed clear aligners: a 4D finite element study with staging simulation | 5e446117-10bb-4f60-955d-cadde05b882e | 11609138 | Dentistry[mh] | As a nonextraction orthodontic strategy, molar distalization can provide space for alignment or incisor retraction. Molar distalization with clear aligners (CAs) has been proven to be effective . The accuracy of CAs for maxillary molar distalization has been reported to be as high as 87% 2 . However, a recent retrospective study revealed that the efficiency of molar distalization decreased to 42% following the subsequent anterior retraction process . With respect to the biomechanics of molar distalization, several studies focused on the tooth movement pattern have proven that distalized maxillary molars achieve mostly tipping movement instead of the planned distal translation with CAs . Molar distalization with different anchorage designs might have different effects on the whole dentition . The use of temporary anchorage devices (TADs) combined with CAs can increase the efficiency of molar distalization . The classic whole treatment process of dentition distalization with clear aligners (DDCA), i.e., the ‘V-pattern’ strategy, can be divided into three stages : (1) the molar distalization stage, in which molars are moved distally and space is moved anteriorly mesial to molars; (2) the premolar distalization stage, in which premolars are moved distally and space is moved anteriorly mesial to premolars; and (3) the aligning or retraction of anterior teeth, in which space in the anterior dentition is used to align or retract anterior teeth. During the staging process, the 3 stages are not exactly separate, with several steps in between that could belong to either stage. In fact, molar distalization is only the first step of DDCA. In the subsequent treatment process, whether the space generated by molar distalization can be successfully transferred to the anterior arch region is a more important step in DDCA . This determines whether the CA can achieve final success . The efficiency of premolar distalization represents how much space has been transferred in this stage. However, there is currently a lack of research on the efficiency of premolar distalization and its underlying factors. In recent years, finite element analysis (FEA) has been used in CA treatment simulations in various studies . Nevertheless, all prior studies conducted to date have focused solely on investigating the initial displacement observed during CA wear, thus overlooking the long-term alterations in dentition throughout CA treatment. The clinical significance of the results is further undermined by the fact that only one step of CA treatment was assessed in the FEA. Four-dimensional (4D) FEA, which considers the biomechanical response of the periodontal ligament (PDL) and the morphological changes in the CA during staging as another dimension, was introduced in our previous studies , enabling long-term CA treatment simulation. For DDCA, there is currently a lack of research on the premolar distalization stage. Therefore, this study used previously constructed 4D FEA of the DDCA-stimulating premolar distalization stage to observe the efficiency of the space forward transfer generated by molar distalization and to explore the impact of different aligner shapes on the efficiency of this stage.
The Institutional Review Board of the bioethics committee at the Peking University School and Hospital of Stomatology (No. PKUSSIRB-202059154) granted approval for this research. The FEA model contained teeth, PDLs, attachments, and CAs (Fig. a), generated from a patient's cone beam CT (CBCT) and oral scan record using Mimics (Mimics 17.0, Materialise, Leuven, Belgium) and Geomagic Studio (Geomagic 15.0, 3D Systems, Rock Hill, SC), following the methodologies outlined in previous publications . For computational efficiency, the model was constructed for the right side only, with symmetrical boundary conditions applied to the median section of the CA. A 0.30 mm shell element was utilized to represent the PDL . Conventional rectangular attachments were designed. A 2 mm space was set between the second premolar and the first molar (Fig. a), simulating the scenario in which the two molars had completed distalization during DDCA. By employing the temperature changing method (TCM), ten steps of distal bodily movement of the first and second premolars were designed, with 0.2 mm of movement per step. In other words, over the 10 steps, the CA between the first molar and the second premolar decreased by 2 mm, the CA between the first premolar and the canine increased by 2 mm, and both deformations were restricted to the mesial–distal direction. The crown surfaces were uniformly offset by 0.7 mm and preprocessed to create the initial CA model . Elastic forces were applied to the CAs, extending from the buccal mesial cervical region of the canine to the buccal TAD, which was positioned within the buccal interradicular space between the first and second molars, 4 mm above the alveolar crest. The magnitude of the elastic forces was set at 150 g (Fig. b). In addition to the control group, which utilized a conventional CA design, two test groups were established: (1) the second molar half wrap (SMHW) group, in which the CA was cut off at the mid-coronal level of the second molar, and (2) the all molars half wrap (MHW) group, in which the CA extended from the mid-coronal level of the first molar to the mid-coronal plane of the second molar and was likewise cut off at the mid-coronal level of the second molar (Fig. c).
As described in previous studies, an iteration method was employed to simulate the bone remodeling process during CA treatment (Fig. a) . In this iteration method, the alveolar bone and the teeth are simplified into rigid bodies that do not deform during the calculation. During the loading process, the PDL deformed. After the initial calculation, the displacements of the teeth were carried over to the next calculation step, and the PDLs were deleted and regenerated for that step. Using the previously described TCM, automatic remodeling of the CA was implemented throughout the long-term orthodontic simulation (Fig. b, Supplementary Fig. , Supplementary Video ) . A virtual CA was initially built for morphology modification to obtain the solid models of the 'actual' CAs for the subsequent 10 steps of the simulation. The specific steps are detailed in Supplementary Fig. . During the TCM process, the CA between the canine and the first premolar increased by 0.2 mm in each step, and the CA between the second premolar and the first molar decreased by 0.2 mm in each step. The direction and amount of CA deformation were precisely controlled according to the formula detailed in Supplementary Fig. .
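The staged workflow described above can be summarized in pseudocode-like form; the sketch below is a schematic outline with placeholder function names (deform_aligner_by_tcm, seat_aligner_on_crowns, and solve_and_update_pdl are hypothetical), not the authors' actual ABAQUS/Hypermesh scripts.

```python
# Schematic sketch of the 4D staging loop: per aligner step, the CA is deformed by
# the TCM, re-seated on the crowns, and the FE model is solved for two PDL
# iterations while tooth displacements are carried forward. Bodies are placeholders.
N_STEPS = 10          # prescribed aligner steps (0.2 mm distal premolar movement each)
PDL_ITERATIONS = 2    # PDL strain change was <0.1 % by the 3rd iteration in pilot runs

def deform_aligner_by_tcm(step):
    print(f"  step {step}: shrink CA distal to the 2nd premolar, expand it mesially (TCM)")

def seat_aligner_on_crowns():
    print("  best-fit the CA inner surface onto the crowns (wear-in)")

def solve_and_update_pdl():
    print("  solve FE model, store tooth displacements, delete and regenerate the PDL")

for step in range(1, N_STEPS + 1):
    deform_aligner_by_tcm(step)
    seat_aligner_on_crowns()
    for _ in range(PDL_ITERATIONS):
        solve_and_update_pdl()
```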
According to preliminary experimentation, less than 0.1% strain of the PDL was observed during the third iteration of the PDL adjustments at each stage of CA treatment, indicating that the model stabilized after the initial 2 iterations of the PDL . Hence, to simulate the clinical scenario accurately, in which CAs are worn for an adequate duration, each step of CA treatment was matched with two iterations of PDL adjustments during staging. Before each iteration, the CA from the previous step was removed and the CA for the next step was imported. A best-fit algorithm, implemented with custom Python subroutines, was carried out to match the inner surface of the CA with the dental crowns to simulate the wear-in process. The material properties for the involved models are detailed in Table . All of the models were assembled via Hypermesh 14.0 (Altair, Troy, Mich). Unstructured four-node tetrahedral elements were employed. ABAQUS/CAE (SIMULIA, Providence, RI) was subsequently used for the FEA. The PDLs and tooth roots were considered position constraints. The contact relationship between the aligners and teeth was the same as that described in a previous study . The optimal element size was set to 0.2 mm according to the results of a convergence study. The occlusion plane and the global coordinate system were defined as described in a previous study (Fig. c). Local coordinate systems were used to illustrate the displacement of the teeth. The crown point (CP), root point (RP), resistance center (RC), and long axis (LA) of each tooth were determined, as shown in Table . For the local coordinate system of each tooth, the Z-axis was aligned with the global coordinate system, whereas the X- and Y-axes corresponded to the mesial/distal and labial/lingual directions, respectively (Fig. d). At the 10th step (the target step), some space remained between the second premolar and the first molar. Therefore, to obtain results of greater clinical significance, a few additional steps with the same prescribed tooth movements (0.2 mm of distal movement per step for both premolars) were added to completely close the remaining space, and the tooth movements at both the 10th step and the 'space-closed step' of each group were investigated.
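The "wear-in" seating mentioned above is essentially a rigid best-fit registration of the aligner's inner surface onto the crown surfaces. A minimal Kabsch/SVD-style sketch for corresponding point sets is shown below; it is an illustration under that assumption, not the authors' custom subroutine, which may use a different (e.g., iterative closest point) formulation.

```python
# Hedged sketch: least-squares rigid registration (Kabsch) of corresponding point sets,
# as one way to "seat" an aligner surface onto crown surfaces. Toy data only.
import numpy as np

def best_fit_rigid(source, target):
    """Return rotation R and translation t minimizing ||R @ source_i + t - target_i||."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, tgt_c - R @ src_c

rng = np.random.default_rng(3)
crowns = rng.random((50, 3))                                 # sampled crown-surface points
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
aligner = crowns @ R_true.T + np.array([0.3, -0.1, 0.05])    # displaced aligner points
R, t = best_fit_rigid(aligner, crowns)
print(np.allclose(aligner @ R.T + t, crowns, atol=1e-6))     # True: seating recovered
```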
Supplementary Video shows the tooth movements during each step in the three groups. The initial and final tooth positions are shown in Fig. . The ‘space-closed step’ for the conventional CA design (control) group, the SMHW group, and the MHW group was the 11th, 12th, and 13th step, respectively. Figures and illustrate the tooth displacement and rotation in local coordinates, respectively. In all line charts, steps 10, 11, 12, and 13 are marked with dashed lines to indicate the target movement step (10th step) and the ‘space-closed steps’ of the three groups. All three groups experienced labial inclination and buccal movement of the canines and incisors in the ‘space-closed step’, with the control group showing the least movement and the MHW group showing the greatest movement (Table ). Specifically, the labial movement and inclination of the central incisors were 0.66 mm/1.82° (control group), 0.83 mm/2.38° (SMHW group), and 1.08 mm/3.19° (MHW group), respectively. For the canines, the mesial movement and mesial tipping were 0.73 mm/2.15° (control group), 0.92 mm/2.31° (SMHW group), and 1.19 mm/2.62° (MHW group), respectively. In the ‘space-closed step’, the MHW group exhibited the greatest distal tipping movement of the second premolars (1.93 mm/5.08°), followed by the control group (1.70 mm/4.55°), with the SMHW group showing the least (1.51 mm/4.18°). The SMHW group exhibited the greatest mesial (tipping) movement of the first molars (0.60 mm), followed by the control group (0.41 mm), with the MHW group showing the least (0.17 mm); the same trend was observed at step 10. In the control group, the mesial movement of the second molars was similar to that of the first molars (both 0.41 mm), with the second molars exhibiting mesial tipping of 1.93°. In contrast, the SMHW and MHW groups presented relatively less mesial movement of the second molars (0.11 mm and 0.02 mm, respectively), with corresponding mesial tipping angles of 0.80° and 0.20°.
In this study, a molar distalization of 2 mm was designed. The initial space between the second premolars and the first molars in all groups was set to 2 mm, so that an equal 2 mm of distal movement was expected for the premolars. In each group's ‘space-closed step’, the efficiency of the distal movement of the second premolars was 85.0% (control group), 75.5% (SMHW group), and 96.5% (MHW group). The first molars exhibited varying degrees of mesial (tipping) movement, which reduced the net achievement of their previously completed 2 mm distalization; this loss of anchorage amounted to 20.5% (control group), 30% (SMHW group), and 8.5% (MHW group). Thus, the MHW design improved the overall efficiency of molar distalization by 12 percentage points relative to the control group during the stage in which the premolars were distalized by 2 mm.
Using a scale factor of 20, Fig. provides a more intuitive representation of the displacement tendencies of the teeth, which is consistent with the numerical results in Figs. and . Specifically, the second molars in the control group exhibited a noticeable tendency toward uncontrolled tipping, with the center of rotation located at approximately the apical third of the root. In the SMHW group, there was less unexpected movement of the second molars, whereas the first molars demonstrated greater mesial movement. Conversely, in the MHW group, there was an increased occurrence of labial tipping of the incisors and canines. The distribution of the maximum principal stress in the PDL (Fig. ) further validates the aforementioned analyses, where cooler colors (blue) represent tension and warmer colors (red) represent compression.
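For clarity, the efficiency and anchorage-loss percentages quoted above follow directly from the reported displacements and the 2 mm prescription; the short script below (values taken from the text) reproduces the arithmetic.

```python
# Reproduces the efficiency figures quoted above from the displacements reported at each
# group's 'space-closed step'; 2 mm of distal movement was prescribed for the premolars,
# and the molars had already been distalized by 2 mm at the start of this stage.
PRESCRIBED = 2.0  # mm

second_premolar_distal = {"control": 1.70, "SMHW": 1.51, "MHW": 1.93}  # mm achieved distally
first_molar_mesial     = {"control": 0.41, "SMHW": 0.60, "MHW": 0.17}  # mm lost mesially

for group in second_premolar_distal:
    efficiency = 100 * second_premolar_distal[group] / PRESCRIBED   # premolar distalization efficiency
    anchorage_loss = 100 * first_molar_mesial[group] / PRESCRIBED   # loss of the molars' prior 2 mm distalization
    print(f"{group}: premolar efficiency {efficiency:.1f}%, molar anchorage loss {anchorage_loss:.1f}%")

# control: 85.0% / 20.5%;  SMHW: 75.5% / 30.0%;  MHW: 96.5% / 8.5%
# MHW vs. control anchorage loss: 20.5% - 8.5% = 12 percentage points of improvement
```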
Most FEA studies on CAs have focused on a single step of CA treatment. However, the mechanical system in CA treatment has a significant cumulative effect, in which the complex interaction between the teeth and aligners determines the final result. The 4D FEA technique, which treats the evolving biomechanical response as the fourth dimension, has been applied to orthodontic-related research in recent years . With 4D FEA, which enables the simulation of dynamically evolving interrelations among all components, orthodontists can obtain more substantial insights into long-term orthodontic treatment outcomes. Nevertheless, prior to our introduction of the TCM to simulate the morphological changes in CAs , all 4D FEA investigations had been confined to fixed appliances. A pioneering simulation of long-term maxillary whole-arch distalization with CAs was subsequently conducted in another study , in which mesial movement of the pre-distalized molars was also noticed during the premolar distalization process. There are two critical points of controversy surrounding DDCA: the tooth movement pattern and the overall efficiency. Molar distalization with CAs involves varying degrees of tipping . CAs have even shown better control of the vertical dimension and of distal tipping of the molars than fixed appliances . To unify the baseline, the initial molar position was set as a total bodily distalization of 2 mm in this study, and the premolars exhibited a movement pattern similar to that of molar distalization, namely controlled bodily movement accompanied by mesiodistal tipping ranging from 3.65° to 5.49° and increasing with the distance moved. This result is consistent with the findings of a clinical study conducted by Yurdakul et al. . Since tipping is difficult to avoid and ensuring sufficient distal movement of the premolars and molars is important in DDCA, it should be noted that premolars and molars that tip more will also show a greater tendency to relapse. Although molar distalization is considered one of the most predictable tooth movement patterns associated with CAs, the overall efficacy of DDCA has been debated among orthodontists . Compared with the well-studied molar distalization stage, the biomechanical system of the premolar distalization stage has long been a neglected area of research. In the molar distalization stage of DDCA, a relatively simple force system is created in which the molars receive a distal pushing force while the other teeth receive a mesial counterforce. This counterforce can be effectively neutralized by a TAD or by intermaxillary traction . Therefore, in research on the molar distalization stage of DDCA, high efficiency can be achieved regardless of the type of traction employed . In this study, 2 mm of distal movement was designed for the maxillary dentition. This amount of tooth movement was chosen according to previous studies, which suggested that 2–3 mm of molar distal movement is a common target during DDCA . As the molars reach their target position and the premolar distalization stage commences, the situation becomes more intricate than in the previous stage. For the premolars, the length of the CA between the first molar and the second premolar is shortened, whereas the length between the first premolar and the canine is extended, resulting in a distal force. The anterior teeth, in turn, experience a mesial counterforce. The most challenging issue involves the molars.
As the CA length between the first molar and the second premolar decreases, the molars experience a mesial force, and this force cannot be effectively countered. Consequently, this leads to mesial movement of the molars in subsequent stages, reducing the overall efficiency of DDCA. This study compared the biomechanical effects of three different CA designs on the premolar distalization stage. Full-crown-surface-wrapped aligners (control group) are the current mainstream design for CAs. With this design, both the first and second molars experienced a mesial force during the premolar distalization stage (Fig. ). In response to the undesirable counterforce on the molars, some CA manufacturers have removed the distal portion of the CA covering the second molar, which is the design of the SMHW group. With this design, the first molars still experienced a mesial force during the premolar distalization stage. In the MHW group, the CA design simultaneously removed the distal areas of the aligners in contact with both the first and second molars. This design eliminated the mesial counterforce produced by the ends of the aligners while retaining the ability of the CAs to block mesial movement of the molars. If contact between the second premolar and the first molar was considered the end of the simulation, the total number of steps required for premolar movement varied among the three groups. This variation is due to differences in the amount of mesial movement of the first molar among the three groups (Fig. ). In the MHW group, the mesial movement of the first molar was the smallest, at only 0.17 mm, thus providing more distal movement space for the premolars and achieving a distal movement efficiency of 95.5–96.5%. This represents a marked increase compared with the control group (84.5–85%) and the SMHW group (75–75.5%). The unexpected inclination or rotation of the first and second molars was also lower in the MHW group (Fig. ). Since the simulation assumed that, at the start of the premolar distal movement, the first molars in all three groups were in the same 2 mm distally moved position, the baseline distal movement efficiency of the first molar was taken as 100%. The value calculated at this stage from the final position of the first molar therefore represents the proportion of that distal movement lost during this stage. The MHW group presented the lowest loss of molar distal movement (8.5%), whereas the SMHW group presented the greatest loss (30%). However, it should be noted that in the MHW group, since neither the first nor the second molars provide anchorage in the mesial–distal direction, the movement of the incisors and canines was the greatest among the three groups (Fig. ). Notably, the mesial movement of the first molar was greatest in the SMHW group (Table ). This interesting phenomenon occurs because, when the distal part of the aligner covering the second molar is removed, the second molar no longer provides anchorage. The force experienced by the first molar therefore increases, leading to its greater mesial movement. The FEA model in this study did not incorporate calculations for relapse movement. However, clinical experience indicates that the second molar will relapse mesially along with the mesial movement of the first molar.
Additionally, compared with the full-crown-surface-wrapped CA design, the SMHW design increased the counterforce on the anterior teeth, resulting in increased mesial movement of the canines (Table ). The original intention of this design was to reduce the mesial movement of the molars by eliminating the mesial counterforce exerted by the end of the aligner on the second molar. However, as the results indicated, this design fails to prevent mesial movement of the molars and may contribute to an increased burden on the anterior teeth. In this study, a buccal TAD placed in the interradicular space between the first and second molars, 4 mm above the alveolar crest, was used for anchorage enhancement. Both the infrazygomatic crest and the interradicular space are suitable sites for buccal TAD placement; however, the buccal interradicular space is more commonly chosen because of its advantages in patient comfort and ease of clinical operation . With the infrazygomatic crest, a larger amount of molar distalization can be achieved without concern about contact between the molar roots and the TAD, but the larger buccal and vertical force components may lead to side effects . With the interradicular space, smaller buccal and vertical force components are generated, but the TAD may need to be relocated when a greater degree of molar distalization is required . In clinical practice, a feasible approach for buccal interradicular TAD placement during maxillary distalization involves positioning the TADs between the premolars during the molar distalization phase; as the treatment progresses to distalize the premolars and anterior teeth, the TADs can be repositioned between the molars, as demonstrated in this study. As our previous study suggested, with a buccal TAD located in the interradicular space, the dentition becomes wider in the molar region during the DDCA process, which may lead to unwanted side effects . Traction forces from TADs located in the infrazygomatic crest, in contrast, provide larger buccal and vertical force components, and the buccal component produces a palatally directed force in the molar region. Therefore, in the global coordinate system, the relative magnitudes of the outward component of the distalization force and the inward component of the palatal force determine the horizontal movement tendency of the molar region. In this study, the inward component of the unwanted palatal force was relatively small owing to the TAD location, which led to an increase in the width of the molar region; other situations need to be analyzed separately. Conventional rectangular attachments were used in this model during the DDCA process. However, novel attachment designs may further decrease the tipping tendency during distalization of the molars. Rossini et al. carried out an FEA study of DDCA and concluded that attachments are mandatory during distalization of the second molar. In contrast, Hong et al. suggested that different attachment designs had a limited impact on the efficacy of the designed movement during DDCA. Ayidaga et al. analyzed the effects of different attachment designs on the efficacy of maxillary molar distalization, and their results indicated that vertical rectangular attachments significantly reduced the tipping tendency during distalization, whereas the novel attachment design they proposed offered the best control of the molars.
However, most of the current studies investigating different attachment types are limited to FEA, and further clinical evidence is needed. Some shortcomings of this study must be noted. The first is the absence of calculations for the relapse tendencies of the teeth, a critical issue that needs to be resolved in further orthodontics-related FEA. Moreover, several simplifications have been made in the current PDL iteration method to balance computational efficiency and accuracy , and errors might accumulate during the iteration process . Only by combining further in vitro experiments and clinical trials can the accuracy of 4D FEA be verified and improved. For convenience, DDCA was divided into three distinct stages in this study, which may differ from the specific clinical steps of DDCA . Nevertheless, the findings of this study are significant, as they demonstrate that strategic removal of the distal part of the aligner for both the first and second molars can effectively curb the mesial movement of the molars in the premolar distalization stage. This, in turn, can enhance the distal movement of the premolars, a critical factor in improving the overall treatment efficiency of DDCA. The results of this study underscore the importance of simplifying the force applied to the molars in this stage and focusing solely on restricting their mesial movement. In future research, consideration should be given to better controlling the positions of the incisors and canines to further improve the treatment efficiency of DDCA. Additionally, 4D FEA has shown great potential in CA treatment simulations. Since orthodontic treatment is a long-term process of complex tooth movement, dynamic simulation techniques that incorporate the effects of time can help reproduce a broader range of clinical scenarios and contribute to clinical research in orthodontics.
A 4D FEA model was developed to predict tooth movement during premolar distalization in DDCA. In the premolar distalization stage of DDCA, removing the distal portion of the aligner covering the first and second molars can effectively reduce the mesial movement of the molars and increase the overall efficiency of molar distalization. Eliminating this counterforce on the molars, however, increases the share of the reactive force borne by the anterior teeth during the premolar distalization stage.
Below is the link to the electronic supplementary material.
Supplementary Figure 1: The temperature changing method. (a) The process is initiated by pinpointing the center point of the dental crown from the occlusal perspective, labeled (Ci, Cj). The margin points of the dental crowns along the Ci–Cj line were subsequently established as (Pi, Pj), after which the midpoint of Pi–Pj (Pc) was determined. An area perpendicular to the Ci–Cj line, encompassing both the mesial and distal regions relative to Pc, was selected as the deformation zone during staging. Deformation within this zone was constrained along the Ci–Cj line. In the formula, U represents the preset deformation magnitude, k represents the coefficient of linear expansion, Δ represents the deformation amount from previous steps, d represents the width of the area, and t represents the temperature change. (b) Process of generating the 'actual' clear aligners through the temperature changing method. (c) The morphology of the virtual clear aligner and of the 'actual' clear aligners at 3 typical steps.
Supplementary Video 1: Occlusal and buccal views of the clear aligner over the course of 10 steps.
Supplementary Video 2: Occlusal and buccal views of the dentition over the course of 10 steps.
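The caption of Supplementary Figure 1 names the variables of the temperature-changing formula without reproducing the formula itself. A plausible reconstruction, assuming a simple linear thermal-expansion law over the deformation zone (this is an inference from the variable names, not the authors' published equation), is:

```latex
% Assumed linear-expansion reading of the caption: the thermal strain over the
% deformation zone of width d supplies the part of the preset deformation U that
% has not yet been accumulated in previous steps (Delta):
\[
  U - \Delta = k\, d\, t
  \quad\Longrightarrow\quad
  t = \frac{U - \Delta}{k\, d}
\]
```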
β‐arrestin2 recruitment at the β2 adrenergic receptor: A luciferase complementation assay adapted for undergraduate training in pharmacology
Experimental workshops are an essential complement to theoretical lectures in pharmacology. Experimental activities based on animal tissues are not aligned with current research practices in pharmacology and with the 3Rs principle.
The luciferase complementation assay provides a reliable and animal‐free alternative for practical activities in pharmacology teaching. Combining practical and computer‐based activities provides students with a thorough overview of pharmacological research practices and was met with marked appreciation.
INTRODUCTION The teaching of general pharmacology entails an essential focus on the concepts of agonism and antagonism, two notions underpinning the common understanding of drug properties. Parameters such as pEC 50 , pK B , and pA 2 quantitatively define the action that biologically active compounds exert on their molecular targets and are thus of utmost importance in the educational curriculum of pharmacology courses. In this context, practical pharmacology classes are instrumental in fostering the learning and comprehension of such parameters among undergraduate students. These workshops typically consist of characterizing drug‐induced effects on fresh animal tissues, notably measuring guinea pig ileum contraction in response to different ligands. , Students are then required to compute EC 50 , pK B , and pA 2 values on the basis of the experimental data that they collected, resulting in an active learning experience that promotes a high‐quality and long‐lasting understanding. Despite their relevant pedagogic value, ex vivo experiments have several drawbacks. Their implementation is frequently cumbersome, unreliable, and poorly reproducible, which hinders their pedagogical outcomes. Of note, their integration in pharmacology programs has been drastically reduced because of growing concerns related to animal experimentation. Moreover, ex vivo experimentation is not aligned with the current drug investigation practices in the pharmaceutical industry, which rely mainly on high‐throughput screening (HTS) campaigns with cellular bioassays. Given that many students in the health‐care sector will be confronted with such technologies in their future workplace, there is an evident unmet need to provide students with an introduction to up‐to‐date methods in drug research. In this context, a number of computer‐based simulations of ex vivo experiments have been developed in order to provide time‐ and resource‐saving alternative approaches, especially when delivering workshops to large cohorts of students. Such in silico approaches meet most of the educational objectives of pharmacology practical classes, especially concerning data handling and experimental design. , However, computer‐based platforms do not support the practical aspects of laboratory training, which are of particular relevance in Pharmacy and Biomedical Sciences programs. , Taken together, these concerns have called for the development of animal‐free, screening‐compatible experimental alternatives to ex vivo educational pharmacology experiments. We herein report the design of a novel pharmacology workshop based on a split firefly luciferase complementation assay (LCA), integrated into the undergraduate programs of Pharmacy and Biomedical Sciences. The activity started with an experimental phase employing the LCA to characterize the competitive antagonism of propranolol at the beta‐2 adrenergic receptor ( β2AR ), followed by a guided data analysis session leading to the calculation of the pK B and pA 2 values of propranolol.
MATERIALS AND METHODS 2.1 Materials Isoproterenol, (±)‐propranolol, poly‐L‐lysine, glass coverslips, paraformaldehyde (PFA), and anti‐FLAG (RRID:AB_10950495) antibodies were purchased from Sigma‐Aldrich (Diegem, Belgium). Primary anti‐hemagglutinin antibodies (RRID:AB_390918) were purchased from Roche (Basel, Switzerland). d ‐Luciferin was purchased from Promega, UK. Secondary antibodies (donkey anti‐mouse antibodies coupled to the AlexaFluor 488 fluorophore (RRID:AB_2556746) and goat anti‐rat antibodies coupled to the AlexaFluor 555 fluorophore (RRID:AB_2535855)), together with bovine serum albumin, l ‐glutamine, penicillin/streptomycin, ammonium chloride, trypsin‐EDTA, and microscope slides were purchased from Thermo Fisher Scientific (Waltham, MA). Cell culture medium (Dulbecco's Modified Eagle Medium) and puromycin (50 mg/mL stock) were purchased from Invitrogen (Merelbeke, Belgium). 96‐well plates were purchased from Greiner Bio‐One (Wemmel, Belgium). 4′,6‐diamidino‐2‐phenylindole‐containing mounting medium was purchased from Biotium (San Francisco, CA). Fetal bovine serum (FBS) was purchased from Biowest (Riverside, MO). 2.2 Molecular cloning The human β 2 AR coding sequence was cloned and amplified from Human Embryonic Kidney 293 (HEK 293) cell (RRID:CVCL_0045) cDNA, while the rat β‐arrestin2 coding sequence was amplified from β‐arrestin2 green fluorescent protein (GFP; 35411; Addgene, Cambridge, MA). Both sequences were engineered as described by Dupuis et al. The N‐terminus of the β 2 AR sequence was fused to a signal sequence (KTIIALSYIFCLVFA) and a FLAG epitope (DYKDDDDK), whereas its C‐terminus was attached to a flexible linker (GSSGGG) followed by the C‐terminal fragment of the firefly luciferase enzyme (FcLuc, amino acids 413–549, as described by Takakura et al ). The β‐arrestin2 protein was flanked by an HA epitope (YPYDVPDYA) and the N‐terminal moiety of the luciferase enzyme (FnLuc, amino acids 1–415), also followed by a flexible linker (GGGS). The modified β 2 AR and β‐arrestin2 sequences were cloned into the pIRESpuro and pIREShygro3 plasmids (TakaraBio, Kusatsu, Japan), respectively, yielding the constructs pIRESpuro‐β 2 AR‐FcLuc and pIREShygro3‐β‐arr2‐FnLuc. 2.3 Cell culture and transfection HEK293 cells were cultivated at 37°C with 5% CO 2 in Dulbecco's modified Eagle medium supplemented with 1% l ‐glutamine, 1% penicillin/streptomycin, and 10% FBS. After the transfection and selection steps described by Dupuis et al. , a clonal population of cells stably expressing the β‐arrestin2‐FnLuc protein (HEK293‐β‐arr2‐FnLuc cells) was obtained. HEK293‐β‐arr2‐FnLuc cells were then transfected with the pIRESpuro‐β 2 AR‐FcLuc vector using the calcium phosphate precipitation method. Three days after transfection, the cells were selected for puromycin resistance (1 μg/mL) in order to obtain stable transfectants, herein referred to as HEK‐LCA cells. The cells were then tested for the expression of the β‐arr2‐FnLuc and β 2 AR‐FcLuc proteins by immunofluorescence. After selection of the clones, the cell lines were routinely cultured in medium containing hygromycin and puromycin to maintain good expression levels. 2.4 Immunofluorescence HEK‐LCA cells plated on poly‐ l ‐lysine‐coated glass coverslips were fixed with a 4% PFA solution for 30 minutes. Background fluorescence was reduced by exposing the coverslips for 15 minutes to a 50 mM solution of ammonium chloride. Non‐specific binding of antibodies to the samples was prevented with a blocking solution composed of 2% BSA diluted in PBS.
Cell membranes were permeabilized through the addition of 0.3% Triton X‐100 detergent to the blocking solution. Immunodetection of the FLAG and HA epitopes was performed by incubating the coverslips overnight at 4°C with mouse anti‐FLAG and rat anti‐HA primary antibodies (1/1000 dilution), followed by a 1‐h exposure (at room temperature) to anti‐mouse antibodies coupled to Alexa Fluor 488 (1/1000 dilution) and anti‐rat antibodies coupled to Alexa Fluor 555 (1/1000 dilution), respectively. After a washing step, nuclei were stained with 4′,6‐diamidino‐2‐phenylindole, and the coverslips were mounted on microscope slides and analyzed with an EVOS fluorescence microscope. 2.5 Luciferase complementation assay On the day of the experiment, students were provided with confluent T175 flasks of HEK‐LCA cells and with 100 mM stock solutions of isoproterenol and propranolol. They were instructed to harvest the cells from one confluent flask with a trypsin‐EDTA 0.05% solution at room temperature, to resuspend the pelleted cells in 5 mL of HBSS buffer, and to distribute them in 96‐well plates (50 µL per well, with approximately 2 × 10 5 cells per well). Students were also instructed to dilute the drugs in HBSS buffer (120 mM NaCl, 5.4 mM KCl, 0.8 mM MgSO 4 , and 10 mM HEPES, pH 7.4) in microtubes. Isoproterenol was to be tested at different concentrations (logarithmic dilutions), alone or in combination with defined concentrations of propranolol. Luciferase activity was assessed after the addition of 50 µL of a 500 µM d ‐luciferin (Promega, UK) solution to each well; after an incubation period of 20–30 minutes, the emitted luminescence was measured using a Victor X3 Plate Reader (Perkin Elmer, Waltham, MA). 2.6 Participants and organization of the laboratory class The laboratory activity was integrated in the pharmacology course given during the second and third years of the Pharmacy and Biomedical Sciences programs, respectively. During the academic year 2019–2020, a total of 139 students participated in the activity (103 were registered in the bachelor program in Pharmacy and 36 in Biomedical Sciences). The practical pharmacology session was scheduled after completion of a series of theoretical lectures. These covered the concepts necessary for understanding the experimental design, granting the necessary background for the insightful interpretation of the results. Attendance, together with the completion of a laboratory report, was a compulsory requirement to pass the final exam. The objectives and the experimental protocols were presented in an introductory lecture and in a series of online videos. Each workshop session involved 20–24 students, divided into groups of three on a voluntary basis. On the day of the experiment, before the start of the activity, student knowledge of the key theoretical concepts required to accomplish the laboratory activity was evaluated by means of a short written, closed‐book test. 2.7 Survey One hundred and six students (76% of all participants) anonymously completed an online survey (in French), consisting of seven closed‐ended, unipolar Likert scale items, answerable with 5‐point agreement options ("Strongly Disagree", "Rather Disagree", "Neutral", "Rather Agree", and "Strongly Agree"). These items were designed to assess the overall opinion of students regarding the main objectives of the workshop, notably improving the understanding of relevant pharmacological parameters, introducing students to research practices and data handling, and reducing teaching‐related animal experimentation.
Completion of the questionnaire took approximately 5 minutes and was not mandatory for the successful completion of the pharmacology class. 2.8 Data and statistical analysis LCA luminescence readouts were expressed in relative light units (RLUs) as means ± SEM. Results were normalized to the maximal response obtained with isoproterenol (considered 100%) or to the signal obtained from non‐stimulated cells (considered 100%) to control for variations in receptor expression. During the validation steps, linear regression and statistical analyses were performed using GraphPad Prism version 5.03 (GraphPad Software, CA). During the teaching activities, students performed data analysis using an in‐house developed Microsoft Excel spreadsheet including the Solver.xlam add‐in macro. Survey answers were first summarized by descriptive statistics and then compared using a two‐sided Fisher's test. In order to facilitate comparisons, we merged the categories "Strongly Disagree", "Rather Disagree", and "Neutral"; the same procedure was applied to the "Rather Agree" and "Strongly Agree" categories. Answer quantification was performed by transforming the aforementioned categories into quantitative variables: scores of 1, 2, 3, 4, and 5 were assigned to the "Strongly Disagree", "Rather Disagree", "Neutral", "Rather Agree", and "Strongly Agree" categories, respectively. 2.9 Nomenclature of Targets and Ligands Key protein targets and ligands in this article are hyperlinked to corresponding entries in http://www.guidetopharmacology.org , the common portal for data from the IUPHAR/BPS Guide to PHARMACOLOGY, and are permanently archived in the Concise Guide to PHARMACOLOGY 2019/20.
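For illustration, the analysis carried out with the Excel/Solver spreadsheet can be sketched in a few lines of Python (a minimal, self-contained sketch with synthetic data, not the authors' workbook): a sigmoid concentration-response curve is fitted to normalized readouts to obtain pEC 50 , the Gaddum relation then gives pK B for a single antagonist concentration, and a Schild regression over several antagonist concentrations gives pA 2 .

```python
# Minimal sketch of the data analysis (synthetic data; requires numpy and scipy).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def hill(log_conc, bottom, top, pec50):
    """Sigmoid concentration-response curve with unit Hill slope; log_conc = log10([agonist] in M)."""
    return bottom + (top - bottom) / (1 + 10 ** (-pec50 - log_conc))

# Synthetic curves: agonist alone (pEC50 = 8.0) and with a fixed antagonist concentration
# (curve shifted rightward to an apparent pEC50 of 7.2).
log_conc = np.arange(-10.0, -4.5, 0.5)
control = hill(log_conc, 0, 100, 8.0)
blocked = hill(log_conc, 0, 100, 7.2)

pec50_ctrl = curve_fit(hill, log_conc, control, p0=[0, 100, 8])[0][2]
pec50_block = curve_fit(hill, log_conc, blocked, p0=[0, 100, 7])[0][2]

# Gaddum equation: dose ratio DR = EC50'/EC50 = 1 + [B]/KB  ->  pKB = log10(DR - 1) - log10[B]
log_B = -7.5                                    # antagonist concentration (log10 M)
dose_ratio = 10 ** (pec50_ctrl - pec50_block)
pKB = np.log10(dose_ratio - 1) - log_B
print(f"pEC50 (control) = {pec50_ctrl:.2f}, pKB = {pKB:.2f}")

# Schild regression: log10(DR - 1) vs. log10[B]; pA2 is minus the x-intercept
# (a slope close to 1 indicates simple competitive antagonism). Placeholder values below.
log_Bs = np.array([-8.0, -7.5, -7.0])
log_dr_minus_1 = np.array([0.3, 0.8, 1.3])
fit = linregress(log_Bs, log_dr_minus_1)
pA2 = fit.intercept / fit.slope                 # equals -(x-intercept)
print(f"Schild slope = {fit.slope:.2f}, pA2 = {pA2:.2f}")
```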
RESULTS 3.1 Cell model and protocol optimization 3.1.1 Immunodetection of β 2 AR‐FcLuc and β‐arr2‐FnLuc proteins in HEK‐LCA cells HEK‐LCA cells were obtained after stably transfecting HEK293 cells with the pIRESpuro‐β 2 AR‐FcLuc and pIREShygro3‐β‐arr2‐FnLuc constructs. Immunofluorescence experiments were performed to verify the co‐expression of the β 2 AR‐FcLuc receptor and the β‐arrestin2‐FnLuc protein, tagged with FLAG and HA epitopes, respectively. We analyzed the fluorescence microscopy images from non‐transfected control cells (Figure ) and from transfected cells selected with hygromycin and puromycin (Figure ). In these cells, we observed a superposition of green and red fluorescent signals, absent in images obtained from non‐transfected cells, confirming the expression of the β 2 AR‐FcLuc and β‐arr2‐FnLuc proteins in HEK‐LCA cells. 3.1.2 Determination of the optimal cellular density for the design of the workshop experimental protocol Next, we aimed at establishing the best working protocol enabling students to successfully perform LCAs. Cells were plated onto 96‐well plates at different densities (200,000, 100,000, 50,000, and 25,000 cells per well) and exposed to increasing concentrations of isoproterenol. RLUs were normalized to the signal obtained from unstimulated cells plated at the same density ("basal"). The signal amplitude was maximal after approximately 15 min and remained stable for up to 45 min (data not shown). We observed a concentration‐dependent increase in luciferase activity at all the tested cellular densities, yielding isoproterenol pEC 50 values consistent with those from the literature (Figure ). Moreover, variations in cell density did not result in significant alterations in the potency of isoproterenol ( p > 0.05; one‐way ANOVA) (Figure ). The analysis of the concentration‐response curves indicated a cell density‐dependent nature of the luminescent signal, with the highest cellular density (200,000 cells/well) delivering the largest signal amplitude. Hence, we designed an experimental protocol requiring students to seed HEK‐LCA cells at a density of 200,000 cells/well, increasing the chances of success of the laboratory activity and facilitating its setup, since a single confluent T175 flask typically yields the 20 million cells needed for a single 96‐well plate. 3.1.3 The LCA allows the detection of propranolol competitive antagonism at the β 2 AR Prior to organizing the workshop, we tested the ability of the beta‐blocker propranolol to inhibit isoproterenol‐induced β‐arrestin2 recruitment at the β 2 AR. RLUs were normalized to the maximal value obtained for each condition and plotted in Figure . When analyzing the concentration‐response curves, we observed that propranolol (10 −7.5 M) induced a rightward shift of the bioluminescent signal, indicating its expected competitive antagonism at the β 2 AR (Figure ). Using the Gaddum equation, we calculated the pK B of propranolol at 8.23, in line with the available literature on this compound. 3.2 Laboratory activity 3.2.1 Workshop setup and results The pharmacology workshop was delivered to students enrolled in the second year of the bachelor's degree in Pharmacy and in the third year of the bachelor's degree in Biomedical Sciences, herein referred to as "Pharm" and "BioMed" students, respectively. The laboratory activity consisted of the quantification of the competitive antagonism of propranolol on β‐arrestin2 recruitment at the β 2 AR.
Students were required to harvest the cells and seed them in 96‐well plates at a density of 200,000 cells/well. They then treated the cells with increasing concentrations of isoproterenol in combination with propranolol (at a concentration of 10 −7 , 10 −7.5 , or 10 −8 M) or with a vehicle solution. After 20–30 min, luciferase activity readouts were collected and analyzed during a supervised data analysis session. An in‐house developed Microsoft Excel ® sheet enabled students to plot concentration‐response curves and derive pEC 50 and pK B values from the experimental results they had gathered, together with performing a Schild linear regression analysis leading to the pA 2 value of propranolol. In total, 46 groups of students participated in the activity. Forty‐two groups obtained a detectable response under the described experimental conditions, setting the success rate of the LCA‐based activity at 91%. Pooling the experimental results of these 42 groups yielded the concentration‐response curves shown in Figure , revealing the concentration‐dependent inhibition of isoproterenol‐mediated β‐arrestin2 recruitment at the β 2 AR in response to increasing concentrations of propranolol. The Schild linear regression analysis of the same data set, plotted in Figure , determined a pA 2 value of 8.45 ± 0.04. 3.2.2 The LCA‐based activity showed marked reproducibility features In order to assess the reproducibility of the LCA, the combined results of student groups that participated in the workshop on different days were compared by one‐way ANOVA followed by a Tukey post hoc test, revealing no statistically significant difference among the average pK B or pA 2 values obtained by the different groups (Figure ). As mentioned above, the students who took part in this workshop were enrolled in two distinct curricula: the bachelor's degree in Pharmacy and the bachelor's degree in Biomedical Sciences. Considering their distinct practical laboratory experience, we hypothesized that their experimental results might substantially differ. Comparison of the mean pK B or pA 2 values obtained by the two student cohorts using a two‐sided Welch t‐test did not reveal any statistically significant difference (Figure ). 3.3 Survey Students having participated in the workshop were invited to complete an online survey on a voluntary basis. In detail, we evaluated to what extent students agreed with seven statements concerning (1) the perceived value of the workshop in their curriculum, (2) its contribution to their understanding of the pEC 50 , pK B , and pA 2 pharmacological parameters, (3) the theoretical knowledge required for the understanding of the workshop, (4) the utility of this workshop in providing useful insights into pharmacological research practices, (5) the utility of this workshop in the development of practical laboratory skills, (6) to what extent the practical, hands‐on approach was a useful complement to the computer‐based feature of the workshop, and (7) whether reducing animal experimentation was a priority. Students expressed their agreement with the presented items using a Likert scale with five modalities: "Strongly Agree", "Agree", "Neutral", "Disagree", and "Strongly Disagree". The answers to the survey are summarized in Figure . Data analysis was performed by pooling "Strongly Agree" and "Agree" answers under the category "Agree", whereas "Neutral", "Disagree", and "Strongly Disagree" answers were grouped under the category "Disagree".
A total of 106 students answered the questionnaire, corresponding to an overall response rate of 76%. Seventy‐four of 103 Pharm and 32 of 36 BioMed students took part in the online evaluation, yielding participation rates of 72% and 89%, respectively. On a scale from 1 to 5, Pharm and BioMed students responded with average scores of 4.27 ± 0.18 and 4.42 ± 0.18, respectively, placing the average answer in the "Agree" category. A two‐tailed Student t‐test did not reveal any statistically significant difference between the average responses of the two cohorts. Nevertheless, differences emerged when comparing the answers using the two‐sided Fisher's test. In detail, students enrolled in the Biomedical Sciences curriculum agreed significantly more than students enrolled in the Pharmacy curriculum with statements 1, 2, 3, and 7 (Figure ).
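The per-item cohort comparison described in the Methods (answers collapsed into two categories, then compared with a two-sided Fisher's test) can be reproduced with standard tools; the sketch below uses scipy, and the counts are purely hypothetical placeholders, not the actual survey data.

```python
# Sketch of the per-item Pharm vs. BioMed comparison: answers collapsed to
# "Agree" vs. "Disagree/Neutral", then a two-sided Fisher's exact test.
# The counts below are hypothetical placeholders, for illustration only.
from scipy.stats import fisher_exact

#          Agree  Disagree/Neutral
table = [[  60,        14],   # Pharm (n = 74)
         [  31,         1]]   # BioMed (n = 32)

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```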
DISCUSSION Understanding and interpreting pharmacological parameters is of pivotal importance for health‐care professionals. For this reason, undergraduate pharmacology training frequently includes practical, laboratory‐based activities aimed at manipulating pEC 50 , pA 2 , and pK B values. Commonly, these activities involve experiments on fresh tissues obtained from laboratory animals. Despite its physiological relevance, such an experimental model implies a complex setup, resulting in poor robustness and reproducibility. We herein report the development and implementation of an original LCA‐based practical class, aimed at providing a reliable, straightforward, and animal‐free option for pharmacology teaching. The assay was applied to the human β 2 AR, a G protein‐coupled receptor (GPCR) of major interest in an educational context owing to its importance in human physiology as well as in pharmacotherapy. The LCA allowed the monitoring of agonist‐evoked recruitment of β‐arrestin2 at the β 2 AR, a molecular event that not only triggers receptor desensitization and internalization but also contributes to downstream signaling. The LCA consists of detecting protein–protein interactions in intact living cells by means of two complementary fragments of the firefly luciferase enzyme, FcLuc and FnLuc, fused to the proteins of interest. As these proteins come into close contact in response to defined triggers, the recombined enzyme is able to catalyze the oxidation of its substrate, which is accompanied by light emission. After validating the expression of β 2 AR and β‐arrestin2 carrying the luciferase fragments in transfected HEK293 cells by immunofluorescence, we optimized the experimental protocol to be communicated to the student groups. Seeding cells at a high density (200,000 cells/well) yielded the strongest bioluminescent signal amplitude, and, considering that students had limited laboratory experience at the time of the workshop, opting for an elevated signal‐to‐noise ratio appeared ideal to improve their chances of successfully accomplishing the experimental tasks. Moreover, the estimated potency of isoproterenol was found to be independent of cellular density. We also verified that propranolol, a β 2 AR competitive antagonist, could inhibit the isoproterenol‐induced signal with a pK B value comparable to literature data obtained from diverse in vitro approaches, including LCA and BRET. As such, the LCA proved to be a valid tool for the pharmacological characterization of β 2 AR ligands and convenient for didactic purposes in the context of a pharmacology workshop. The goal of this experimental activity, organized in the bachelor programs in Pharmacy and Biomedical Sciences, was to exploit the LCA to assess and quantify the competitive antagonism of propranolol on isoproterenol‐induced β‐arrestin2 recruitment at the β 2 AR. Students performed LCAs with a success rate of 91%. The reported experimental failures were mostly associated with incomplete cell harvesting and inaccurate pipetting, leading to large variations or even to aberrant data. After data collection, students were able to observe that the concentration‐response curve of isoproterenol underwent a rightward shift depending on the concentration of propranolol. Thanks to a supervised data analysis session using an in‐house developed Microsoft Excel sheet, students calculated pEC 50 and pK B values, together with performing Schild linear regression analyses yielding the pA 2 value of propranolol.
The comprehensive analysis of the LCA experiments (N = 42) set the pA 2 value of propranolol at 8.45 ± 0.04, closely comparable to the estimated potency of propranolol reported in animal tissues. This result supports the validity of the LCA as a reliable alternative to pharmacological experiments performed on animal-derived fresh tissue in the context of undergraduate training. Importantly, when analyzing the experimental results collected on different days of the workshop, we observed the robustness provided by the LCA: mean pK B and pA 2 values obtained for propranolol on different days were not statistically different. BioMed students participated in the workshop after having acquired more laboratory experience than Pharm students, raising the possibility of a cross-cohort variation in experimental results. Yet, statistical analysis of mean pK B and pA 2 values obtained by Pharm and BioMed students did not reveal any statistically significant difference. Being rather straightforward to apply, LCAs may be exploited in any health care-related program without requiring any particular practical skill. However, the successful outcome of the workshop requires a series of activities aimed at priming students for the practical class. In the case reported here, students gained theoretical knowledge through pharmacology lectures and attended a specific session describing the rationale and timeline of the experiment. Moreover, preparation for the practical activity was supported by a series of short videos available on the online platform of the pharmacology course. Student groups of up to 24 were then expected to run the experiment autonomously, under the supervision of two designated postgraduate teaching assistants capable of providing guidance throughout the workshop, including the data analysis session. The opinion of students regarding the newly implemented activity was also of importance. Their appreciation of the laboratory activity was evaluated through a survey exploring their agreement with seven statements on the main goals and features of the LCA-based pharmacology workshop. Seventy-six percent of the students participated in the evaluation activity. Overall, the activity received a high degree of satisfaction, as the vast majority of students agreed with the presented statements. Supported by the survey results, we can conclude that the LCA-based workshop provided students with a laboratory activity considered valuable for their education (irrespective of their curriculum), helping them to become familiar with the pEC 50 , pK B , and pA 2 parameters and providing them with useful insights into pharmacological research practices. The preparatory activities were also regarded as useful for the understanding of the workshop. Of note, the practical orientation of the activity was regarded as an asset, providing an opportunity to develop useful technical skills and serving as a valuable complement to the computer-based data analysis session. Importantly, the majority of students showed particular concern for the reduction of animal experimentation, supporting the introduction of in vitro experiments in a teaching-related context. Interestingly, the opinions of BioMed and Pharm students slightly differed: students enrolled in the Biomedical Sciences curriculum considered the workshop more useful in the context of their education and showed more concern for animal experimentation.
These results may be explained by the strong focus of the Biomedical Sciences program on fundamental research. Indeed, the practical aspects of bench-related activities, together with animal experimentation, may appear more compelling to future professionals in the field of biomedical research than to future pharmacists. In summary, we report the successful implementation of an LCA in the context of undergraduate pharmacology teaching. Requiring readily available material and delivering reproducible results, this assay offers a valuable and cost-effective in vitro alternative to educational experiments performed on living animals or on freshly extracted animal tissues, providing a suitable option for the reduction of teaching-related animal experimentation. In this context, computer-based simulations of pharmacological experiments have found widespread application in pharmacology teaching; nevertheless, these approaches fail to introduce undergraduate students to practical laboratory activities and to the acquisition of the practical skills that are necessary in pharmacological research. The pharmacology workshop developed at our university combined practical, on-the-bench experimentation with computer-based data handling and interpretation. Its implementation resulted in a fruitful blend that met with strongly positive appreciation from students, while facilitating their understanding of crucial pharmacological concepts such as pEC 50 , pK B , and pA 2 . Moreover, the assay showed a relevant degree of reproducibility and robustness, together with straightforward implementation, providing academics with a flexible methodological tool to design and run pharmacological experiments with an educational purpose. Its screening-compatible format aligns it with current practices in pharmacological research and with technological advances in the field. Of note, the assay was applied to the β 2 AR after being initially designed to investigate the molecular interaction between β-arrestin2 and the orphan receptor GPR27. Similarly, the LCA might easily be adapted to a wide array of receptors and signaling pathways, providing teachers with relevant implementation flexibility. In addition, selecting drugs that are relatively stable in solution at room temperature, such as isoproterenol and propranolol, constitutes an advantage for student-oriented laboratory activities. In conclusion, we propose that similar in vitro assays blending practical and computer-based activities, in combination with theoretical lectures and digital supports, may contribute to enhancing pharmacology teaching for students enrolled in health care-related university programs.
DECLARATION OF TRANSPARENCY AND SCIENTIFIC RIGOR This Declaration acknowledges that this paper adheres to the principles for transparent reporting and scientific rigor of preclinical research as stated in the BJP guidelines for Design & Analysis, and as recommended by funding agencies, publishers, and other organizations engaged with supporting research.
The authors declare no conflict of interest.
N.D. and J.H. developed the LCA. M.F. and N.M. performed the preliminary experiments. E.H. conceived and designed the didactic application of the LCA and supervised the workshop and the whole pharmacology course. N.M. developed the Microsoft Excel spreadsheet employed during the computer-based part of the workshop. N.M., M.R., and L.R. developed the teaching material and the protocol for the students. M.F. and E.H. designed and delivered the survey. M.F. and P.B. performed data analysis. M.F. and E.H. wrote the paper.
Stroke treatment guideline. Academia Iberoamericana de Neurología Pediátrica

Stroke in children is infrequent but is associated with high morbidity and mortality. It can occur at any time of life and is classified as arterial or venous, and as ischemic or hemorrhagic. The aim of this guideline is to develop and unify recommendations and practical approaches that may help professionals who are not specialists in vascular disease. An exhaustive literature review was performed, and recommendations were graded with the 2011 Oxford levels of evidence (LoE) system. The manuscript was reviewed by the scientific committee of the Academia Iberoamericana de Neurología Pediátrica. Neonatal arterial ischemic stroke (AIS) occurs between birth and 28 days of life, with magnetic resonance imaging (MRI) showing acute ischemic infarction. Presumed perinatal strokes (from 20 weeks of gestation to 28 days of life) are excluded, as are other causes of stroke such as intraparenchymal hemorrhage and infarcts diagnosed in infants with hypoxic-ischemic encephalopathy. Neonatal AIS occurs in 1/3,000 to 1/9,803 live births and accounts for approximately 80% of perinatal strokes. Posterior-circulation stroke represents only 10% of neonatal AIS and is associated with a better outcome. When should arterial ischemic stroke be suspected in a neonate? Onset with focal seizures after 12 hours of life is present in half of neonates with stroke; seizures may be subtle and go unnoticed. Other symptoms include apnea, lethargy, feeding difficulties, and hypotonia. Billinghurst et al reported that 95% of newborns with neonatal stroke had acute symptomatic seizures: within the first 12 hours of life in 26%, at 12-24 hours in 40%, at 25-48 hours in 24%, and after 48 hours in 10%. In summary, clinical seizures and/or apnea within the first 12-72 hours after birth are present in 70-90% of patients with neonatal AIS (LoE B). Risk factors. The proposed risk factors are numerous; preeclampsia, intrapartum fever, chorioamnionitis, birth asphyxia, hypoglycemia, and low birth weight are considered the most important (LoE B). Which complementary studies are recommended? Cranial ultrasound. It is reliable and frequently the first study performed. In neonatal AIS, a triangular area of increased echogenicity with a cortical base within an arterial territory may be detected. The reported overall sensitivity of cranial ultrasound for detecting an image suggestive of neonatal AIS was 87% (95% confidence interval: 79-95%) for an expert reader, falling to 72% (61-83%) for a non-expert reader; sensitivity was 83% within the first 24 hours and 86% at 24-48 hours (LoE B). Computed tomography. The acute infarct may not be visible on computed tomography (CT) within the first 24 hours, and lacunar infarcts may be underdiagnosed. CT is not recommended because of radiation exposure and its low sensitivity in the acute phase; only in an emergency, when MRI is not possible, may it be used to rule out a mass lesion potentially amenable to surgery. The use of CT in neonatal AIS is not recommended (LoE B).
Magnetic resonance imaging. MRI is more sensitive for the diagnosis of neonatal AIS, especially for small infarcts. Diffusion-weighted imaging (DWI), supported by apparent diffusion coefficient maps, is diagnostic of acute infarction. T1 and susceptibility-weighted (SWI) or T2 gradient-echo (T2 GRE) sequences are used to assess intra- or extra-axial hemorrhage, and axial T2 to assess edema and myelination. MR angiography should also be included (LoE B). The best time to assess the extent of the infarct is between 2 and 4 days. Cardiological evaluation. A careful physical examination and auscultation should be performed; if an abnormal murmur or sound is found, an echocardiogram should be requested (LoE B). Thrombophilia studies. Routine thrombophilia testing (antithrombin III, protein C and S deficiency, and factor V Leiden or prothrombin 20210 mutations) or testing for other biological risk factors, such as antiphospholipid antibodies, high factor VIII, homocysteinemia, lipoprotein (a), or the thermolabile methylenetetrahydrofolate reductase variant, should not be considered routine in neonates with AIS. Factor V Leiden testing should be performed only when there is a family history of venous thromboembolic disease, and antiphospholipid antibodies should be sought only in the case of clinical events related to a maternal antiphospholipid syndrome. In summary, these tests should not be requested routinely in neonates with AIS (LoE B). Is treatment indicated? During the acute phase, treatment consists of supportive measures such as maintaining normal hydration, electrolytes, glucose, hemoglobin, oxygen, and pH levels. Hyperthermia should be prevented; the role of therapeutic hypothermia has not yet been established. Clinical or subclinical seizures should be treated. An electroencephalogram or continuous electroencephalography is required to recognize subclinical seizures and the effect of antiseizure medication; guidelines include phenobarbital as first line. Antiplatelet agents and anticoagulation with low-molecular-weight heparin (LMWH) or unfractionated heparin are rarely indicated because of the low risk of recurrence; they should be considered in neonates with stroke and hereditary thrombophilia or complex congenital heart disease (not including patent foramen ovale). There is no evidence for reperfusion therapies (LoE B).
In this section we address clinical and diagnostic aspects in pediatric patients, with the aim of raising the index of suspicion for stroke and offering treatment at earlier stages. When should arterial ischemic stroke be suspected in a pediatric patient? The most frequent presenting symptoms are hemiparesis and facial palsy (67-90%), speech or language disturbances (20-50%), visual disturbances (10-15%), and ataxia (8-10%). Onset may also be with non-localizing symptoms such as headache (20-50%) and altered consciousness (17-38%). Seizures are more common in children than in adults (15-25%), especially under 6 years of age (LoE C). Several scenarios can be considered: – Cardioembolic ischemic stroke. More frequent in hospitalized and younger patients (mean age: 6 months to 3 years). Abrupt onset with hemiparesis (36-75%) and seizures (more than 40%); up to 40% may be silent. – Ischemic stroke due to moyamoya-type arteriopathy. High prevalence of transient ischemic attacks and a large burden of silent infarcts. Clinical picture: hemiparesis and hemisensory deficit (72%), chronic headache (52%), and seizures (<10%) (LoE B). – Posterior-circulation stroke.
More common in boys, mean age 7 to 8 years, and generally previously healthy. Onset: hemiparesis, ataxia, dysarthria, and visual-field and oculomotor deficits (70-100%); non-localizing symptoms such as headache, vomiting, and impaired consciousness occur in 60-70%. Vertebral artery dissection is the most frequent etiology (25-50%) (LoE C and D). Which complementary studies are recommended? Given the wide spectrum of differential diagnoses in patients with an acute neurological syndrome, and to confirm arterial ischemic stroke, neuroimaging is the first study to be performed. Method of choice. Brain MRI demonstrates the infarct early. A protocol that can be completed in less than 15 minutes is recommended, especially if the patient may be a candidate for hyperacute treatment. The recommended sequences (which may vary by institution) are DWI with an apparent diffusion coefficient map, T2 or FLAIR, T2 GRE or SWI, and arterial MR angiography of the cerebral and neck vessels (LoE B). A major disadvantage is the need for anesthesia because of the patients' age. In these cases, brain CT with CT angiography of the cerebral and neck vessels is recommended, which excludes hemorrhage and some differential diagnoses and confirms whether a large-vessel occlusion is present. If the patient meets criteria for reperfusion, the Alberta Stroke Program Early CT Score (ASPECTS) should be applied to the neuroimaging study; it quantifies early ischemic changes in the middle cerebral artery territory. Its aims are to select the patients most likely to benefit from reperfusion, to detect the risk of post-treatment hemorrhage, and to estimate prognosis. It can be calculated on CT or on MRI (on DWI). ASPECTS is quantitative and divides the vascular territory of the middle cerebral artery into 10 regions (ganglionic and supraganglionic); starting from an initial score of 10, one point is subtracted for each affected region (LoE D; a worked scoring sketch is provided after the treatment section below). Concomitantly, a complete blood count, liver function tests, electrolytes, and a coagulation profile should be obtained (LoE C). Cardiological evaluation. A clinical cardiological examination and an echocardiogram with bubble study, together with an electrocardiogram to rule out arrhythmias, are recommended in every patient. When there is a history of cardiac defects or of stroke during exercise, Holter monitoring and referral to a specialist are recommended; in the case of recurrence, a transesophageal echocardiogram with bubble study should also be considered (LoE B). Thrombophilia studies. The hereditary disorders statistically associated with a first ischemic stroke are elevated lipoprotein (a) levels, decreased levels of coagulation inhibitors (antithrombin and protein C), and genetic mutations (factor V Leiden or prothrombin mutations, and methylenetetrahydrofolate reductase with hyperhomocysteinemia). For acquired thrombophilias, antiphospholipid antibodies, lupus anticoagulant, anticardiolipin and/or anti-β2-glycoprotein antibodies should be requested. With the exception of genetic testing, laboratory studies should be requested three months after the stroke (LoE B). Genetic studies. These should be considered in light of physical examination findings, for example joint hyperlaxity associated with dissection, in which case ruling out certain connective tissue disorders is suggested.
In addition, depending on the distribution of the infarct and the pattern of the arteriopathy, the following will be requested: RNF213, ACTA2 R179, BRCC3/MTCP1, GUCY1A3, SAMHD1, Alagille syndrome, neurofibromatosis type I, PHACE syndrome, etc. If hemorrhage associated with the ischemia is documented without a better explanation, COL4A1-COL4A2 should be considered. Is treatment indicated, and which treatments are recommended? Yes. The appropriate treatment is chosen according to the elapsed time (window) and the etiology of the stroke. Considering the window period (from onset to confirmation of the stroke), treatments can be divided into reperfusion/hyperacute therapy, neuroprotection, secondary prevention, and surgery. Hyperacute therapies. Arterial recanalization therapy, with either intravenous or intra-arterial tissue plasminogen activator (tPA), and mechanical thrombectomy have shown significant benefit in adults with ischemic stroke within the window period; in pediatrics their application is still difficult (LoE C). When recanalization occurs before tissue death, reperfusion reduces the ischemic injury. Beyond the window period, the risk of hemorrhagic transformation of the infarct, reperfusion injury, and thrombotic and non-thrombotic complications related to the catheter and device increases; in short, more harm than benefit results. Intravenous alteplase. The use of tPA in young children is limited by the difficulty of determining exactly when the stroke began and of assessing the neurological deficit. The most recent guidelines on pediatric stroke state that tPA is feasible in children from 2 years of age with a persistent neurological deficit (pediatric National Institutes of Health Stroke Scale [NIHSS] ≥ 4) and a radiographically confirmed large-vessel occlusion within 4.5 hours of onset (LoE B). Based on expert opinion, the same intravenous tPA dose as in adults is used (0.9 mg/kg), with 10% of the dose as a bolus over one minute and the remainder infused over 60 minutes (maximum, 90 mg); a worked dose calculation is provided after the treatment section below. There are contraindication criteria related to the patient's history, the etiology, neuroimaging, the clinical examination, and laboratory tests (LoE B). Thrombectomy. The success of mechanical thrombectomy in adults with stroke is based on randomized controlled trials (class 1A). In pediatrics this procedure has reached LoE C, mainly because of the caliber of the arteries, contrast use relative to body weight, radiation exposure, and arteriopathies as a frequent etiology of stroke. The ideal situation is a large-vessel occlusion secondary to a thrombus of cardiac origin. The location of the large-vessel occlusion is another determining factor: the intracranial internal carotid artery, the middle cerebral artery (M1), and the basilar artery are the most recommended sites. The pediatric NIHSS score should be > 6 before the procedure, the ASPECTS score > 7, and the window period less than six hours. Neuroprotection. Most patients with stroke will not be candidates for reperfusion; however, early confirmation allows neuroprotection to be instituted. The goal is to salvage the penumbral tissue by optimizing oxygen and glucose delivery and minimizing metabolic demands through fever and seizure control. All of these strategies are extrapolated from the adult literature (LoE B).
Monitoring for at least 24 hours is recommended in every patient with stroke; hyperglycemia/hypoglycemia should be treated (target: 140-180 mg/dL); and causes of hyperthermia (> 38 °C) should be identified and treated with antipyretics. Blood pressure should be kept within the normal range, and hypotension above all should be avoided. Patients with stroke secondary to intracranial cerebral arteriopathy are particularly sensitive to abrupt drops in blood pressure, which can cause cerebral hypoperfusion; the use of antihypertensive drugs in this group of patients can trigger new infarcts. Hypotension should be treated aggressively: flat head of bed, intravenous fluids and, rarely, vasopressors or fludrocortisone. In patients with sickle cell disease and ischemic stroke, blood transfusion is recommended in the acute period (within the first six hours), even before neuroimaging is performed; this increases oxygen delivery when the hemoglobin level is <10 g/dL. With this treatment, hemoglobin values should not exceed 11 g/dL, and to avoid hyperviscosity syndrome hemoglobin levels should be checked every two hours after transfusion. Regardless of the type of sickle cell disease, after blood transfusion an exchange transfusion is recommended to lower hemoglobin S levels to approximately 15% and to raise hemoglobin to ≈10 g/dL. In addition, general measures such as optimal hydration and correction of hypoxemia and systemic hypotension should be taken, especially in patients with associated moyamoya syndrome. Early decompressive surgery should be considered in cases of malignant middle cerebral artery infarction and of cerebellar infarction with mass effect. In children with malignant middle cerebral artery infarcts, early prophylactic hemicraniectomy within the first 24 hours should be considered, or frequent clinical and neuroradiological monitoring should be implemented within the first 72 hours to follow the edema and the need for surgery (LoE B). Secondary prevention. The decision to start antithrombotic therapy, the choice of drug, the timing of initiation, and the duration depend on the cause of the stroke and on factors such as age and comorbidities. When the stroke is determined to be cardioembolic or due to a thrombophilia, anticoagulation is recommended (LoE B). In cases of cervical arterial dissection there are no pediatric data to guide the choice of antithrombotic drug; the presence of an intraluminal thrombus may favor anticoagulation, whereas antiplatelet therapy is preferred for a large infarct. Antiplatelet agents are recommended in cryptogenic infarcts and in moyamoya. Antithrombotic therapy should be started only once the risk of recurrence outweighs the possibility of hemorrhagic transformation of the infarct. When anticoagulation is used, a heparin infusion is recommended, with neuroimaging once the therapeutic range is reached to rule out hemorrhage before switching to a long-acting anticoagulant. The choice of anticoagulant (low-molecular-weight heparin, warfarin, or direct oral anticoagulants) should be based on the etiology and patient factors. Antiplatelet agents (acetylsalicylic acid and clopidogrel) can be started earlier than anticoagulants. The duration of treatment depends on the cause.
In cases of cryptogenic stroke, by expert consensus, treatment for 2 years is recommended, since most recurrences occur within this period (LoE B). Immunomodulatory therapies. Corticosteroids and other immunomodulatory therapies may play a role in preventing recurrence in patients with infectious or inflammatory arteriopathies (LoE D). In children with focal cerebral arteriopathy, corticosteroids are suggested to improve outcome when added to antiplatelet agents (LoE D). Tumor necrosis factor inhibitor therapy is the mainstay of treatment in children with DADA2 (LoE D); moreover, because of the risk of hemorrhagic stroke, antithrombotic therapies are not recommended in this condition. Closure of patent foramen ovale. The pathophysiological role of the patent foramen ovale in cryptogenic stroke is unclear (LoE D). Revascularization surgery in moyamoya. It is indicated because of the high risk of stroke recurrence (LoE D).
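The following minimal Python sketch illustrates the ASPECTS arithmetic described in the neuroimaging section above: an initial score of 10, with one point subtracted for each affected region of the middle cerebral artery territory. The region labels follow the standard ASPECTS template (ganglionic and supraganglionic levels); the sketch is purely illustrative and is not part of the guideline recommendations.

```python
# ASPECTS: start at 10 and subtract one point per affected MCA-territory region.
ASPECTS_REGIONS = {
    "C", "L", "IC", "I", "M1", "M2", "M3",   # ganglionic level
    "M4", "M5", "M6",                        # supraganglionic level
}

def aspects_score(affected_regions):
    """Return the ASPECTS score (10 = normal) for a set of affected regions."""
    affected = set(affected_regions)
    unknown = affected - ASPECTS_REGIONS
    if unknown:
        raise ValueError(f"Unknown ASPECTS region(s): {sorted(unknown)}")
    return 10 - len(affected)

# Example: early ischemic change in the insula and two cortical regions
print(aspects_score({"I", "M2", "M5"}))   # -> 7
```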
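Likewise, the intravenous alteplase dose arithmetic stated in the hyperacute-therapy section (0.9 mg/kg capped at 90 mg, 10% given as a one-minute bolus and the remainder infused over 60 minutes) can be summarized in a short worked example. The body weights are hypothetical and the sketch is illustrative only; it is not a prescribing tool.

```python
# Worked example of the alteplase dose split described above (illustrative only).
def alteplase_dose(weight_kg):
    total = min(0.9 * weight_kg, 90.0)   # mg, 0.9 mg/kg capped at 90 mg
    bolus = 0.10 * total                 # mg, given as a bolus over 1 minute
    infusion = total - bolus             # mg, infused over the following 60 minutes
    return total, bolus, infusion

for weight in (25, 60, 110):             # hypothetical body weights in kg
    total, bolus, infusion = alteplase_dose(weight)
    print(f"{weight} kg -> total {total:.1f} mg, bolus {bolus:.1f} mg, infusion {infusion:.1f} mg")
```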
Cerebral sinovenous thrombosis (CSVT) is an infrequent and underdiagnosed entity involving thrombosis of the superficial or deep venous system, with obstruction of venous drainage and intracranial hypertension; in half of cases it is associated with venous infarcts. The incidence ranges from 0.8 to 40/100,000 children/year, and half of cases present before one year of age, especially in the neonate (LoE C). When should cerebral sinovenous thrombosis be suspected? It should be considered in patients with risk factors that may favor CSVT. The clinical picture is nonspecific and may be acute or subacute.
The symptoms in the neonate are poor responsiveness, vomiting, feeding refusal, or seizures (LoE C); in children, the symptoms are those of intracranial hypertension, such as headache, vomiting, visual disturbances, and papilledema, and may be accompanied by seizures or focal signs related to the infarct (LoE C). In some cases it may be asymptomatic, found as a radiological finding in a child with otomastoid infection or head trauma. What are the risk factors/predisposing conditions for cerebral sinovenous thrombosis? Risk factors are identified in more than 80% of cases and differ by age group. – In neonates: maternal factors, pregnancy, delivery, or acute neonatal illness. – In children: pre-existing diseases, infections, trauma, or dehydration (LoE C). Iatrogenic factors linked to therapeutic hypothermia and cardiac surgery have increased this complication (LoE C). Up to 60% of neonates and children with CSVT have abnormal thrombophilia tests, compared with 15-25% of adults (LoE C). Which complementary studies are recommended? Neuroimaging. Neuroimaging is essential for the diagnosis. Findings can be divided into direct signs (visualization of the thrombus in the dural sinus or cerebral vein) and indirect signs (edema, ischemia, or parenchymal hemorrhage). Ultrasound is very useful in neonates with suspected CSVT and can detect thalamic or intraventricular hemorrhages suggestive of thrombosis (LoE D). Doppler can directly identify thrombosis in superficial sinuses or show absence of flow in a vein, suggesting a thrombus (LoE D). CT is recommended in very unstable patients or when MRI is not available (LoE D). On non-contrast studies, the thrombosed sinus appears hyperdense and expanded in the acute stage (dense triangle sign) (LoE D); this finding may be difficult to interpret in neonates, since the sinus can normally be relatively hyperdense compared with the brain parenchyma (LoE D). Contrast-enhanced CT can facilitate detection of the thrombus (empty delta sign) (LoE D). CT venography is similar in sensitivity to MRI, particularly for the deep venous system and small veins (LoE D). MRI with venography is the method of choice (LoE D). Changes in signal intensity of the affected sinus may be seen, secondary to hemoglobin degradation and flow alterations (LoE D). In neonates, venography images must be interpreted with care (greater prevalence of flow-signal voids in the venous system because of the smaller caliber of the vessels) (LoE D). Contrast-enhanced MR venography is superior to non-contrast imaging because it identifies the empty delta sign (LoE D). Parenchymal involvement is present in approximately 57% of cases (56% hemorrhagic lesions) (LoE D). Laboratory tests. A thorough evaluation of risk factors is recommended: complete blood count, serum iron and ferritin, urinalysis, urea and creatinine, liver function tests, serum protein electrophoresis, erythrocyte sedimentation rate, C-reactive protein, procalcitonin, blood cultures, urine culture and respiratory secretions, including SARS-CoV-2; and, depending on clinical suspicion, cerebrospinal fluid studies or a connective tissue disease panel (LoE D). Testing for genetic and acquired thrombophilias is recommended especially when no etiology is identified (LoE D).
Testing for non-genetic prothrombotic disorders should be performed at 3 months (LoE D). What is the recommended treatment? Supportive care is fundamental (maintenance of homeostasis, antiepileptic drugs if necessary, and treatment of underlying infections) (LoE D). Anticoagulation with low-molecular-weight heparin is safe and should be considered on an individualized basis, both in neonates and in older children, even in the presence of hemorrhage (LoE D). The clearest evidence suggests that the absence of anticoagulant treatment is associated with thrombus propagation and subsequent infarction, which carries a worse prognosis (LoE D). In cerebral venous thrombosis secondary to otologic infections, combined treatment with antibiotics, surgery, and anticoagulation is recommended (LoE D). In neonates, recanalization times may be shorter, so MRI can be repeated between 6 and 12 weeks to assess discontinuation of low-molecular-weight heparin treatment (LoE D).
Hemorrhagic stroke represents almost half of all stroke cases. Its etiology differs from that in adults, and between the neonatal and the pediatric period. Traumatic forms, intraventricular hemorrhage of prematurity, and hemorrhagic transformation of ischemic stroke are excluded here. What are the causes of hemorrhagic stroke in the neonatal period? The causes of hemorrhagic stroke in term neonates are diverse and include coagulopathies, thrombocytopenia, and, less frequently, structural vascular abnormalities. Although no specific cause is identified in most neonates with hemorrhagic stroke, some risk factors have been described, such as emergency cesarean delivery, fetal distress, and male sex. In neonates with hemorrhagic infarction associated with porencephaly, glaucoma, or cataracts, mutations in the COL4A1 and COL4A2 genes should be sought. Vitamin K deficiency must be ruled out in patients who did not receive postnatal supplementation; this deficiency can also occur in infants of mothers who received warfarin, phenytoin, or barbiturates during pregnancy. Other described causes are hemophilia A and other hereditary coagulopathies. What are the causes of hemorrhagic stroke in pediatrics? Structural lesions are documented in about 75% of cases of spontaneous nontraumatic pediatric hemorrhagic stroke, and arteriovenous malformations are the most frequent (LoE B). Other structural causes are aneurysms, arteriovenous fistulas, and cavernous malformations (LoE B). In approximately 10% of cases no cause is documented. Among the hereditary hematologic causes, the most common are hemophilia A (factor VIII deficiency) or B (factor IX deficiency) and von Willebrand disease. Less frequent are deficiencies of factor VII, factor II, and factor XIII. Among the acquired causes, the most common is idiopathic thrombocytopenic purpura. Coagulopathy can also be related to liver failure or acute disseminated coagulation, or it can be iatrogenic, for example secondary to the use of anticoagulants in patients undergoing cardiovascular surgery or on extracorporeal circulation (LoE B). Systemic or central nervous system infections are another cause. Patients with sickle cell anemia are also at increased risk of hemorrhagic stroke.
Common comorbidities are genetic or vascular anomalies, cardiac disease/surgery, collagen diseases, and inborn errors of metabolism (LoE B). When should we think of a hemorrhagic stroke? The clinical manifestations are irritability, altered consciousness or epileptic seizures, cranial nerve involvement, and visual or cerebellar disturbances. Headache and neurological deficit in infants can be more nonspecific and are related to the location of the hemorrhage (LoE B). The neonate may show sudden, catastrophic deterioration, similar to an older patient with a large-volume stroke, or decreased alertness, hypotonia, abnormal eye movements, respiratory difficulty, and seizures (LoE B). What are the recommended complementary studies? CT is highly sensitive for detecting hemorrhage and is useful in patients with impaired consciousness (LoE B). MRI sequences (DWI, SWI or GRE, FLAIR, and arterial and venous MR angiography) are ideal, especially in the more stable patient, since they allow hemorrhagic transformation of arterial or venous origin to be distinguished and infarction to be differentiated from primary hemorrhage; they are recommended in the acute phase of hemorrhagic stroke. Cerebral angiography is the reference technique for the study of malformations (LoE D). Easily correctable risk factors, such as thrombocytopenia, coagulopathy, or hypertension, should be identified. In selected cases, once the patient has been stabilized, it is recommended to rule out bleeding disorders, taking into account the family history and laboratory findings. What are the recommended treatments? Maintain adequate cerebral perfusion, prevent rebleeding, control seizures, and provide continuous electroencephalographic monitoring (LoE D). There are no studies of nimodipine or tranexamic acid in children. Surgical indications are related to the management of complications such as intracranial hypertension. There are no randomized studies on ventricular drainage, craniotomy, or placement of an intracranial pressure monitor (LoE C). Surgical interventions for the prevention of rebleeding depend on the type of lesion.
This guideline reviews the clinical manifestations, complementary studies, and treatment of stroke and of cerebral venous and sinus thrombosis in pediatrics. The application of these recommendations may vary according to the individual case and the resources available at each institution. The Academia Iberoamericana de Neurología Pediátrica promotes education and the dissemination of information in pursuit of better care for patients and their families. |
Evaluating palatal mucosal thickness in orthodontic miniscrew sites using cone-beam computed tomography | bbdcb722-9ca3-43a7-b36d-460bf0514792 | 11439259 | Dentistry[mh] | In orthodontic treatment, anchorage control is an important consideration in extraction treatments and in patients requiring molar distalization to avoid unwanted treatment results . Orthodontic miniscrews are widely used today instead of traditional anchorage appliances, Nance appliances, or transpalatal or lingual arches . Miniscrews have revolutionized anchorage control, ushering in a new era in orthodontic practice . Many studies concur that the success rates of orthodontic miniscrews are 80–95% , and most studies in the literature on miniscrew stability have documented a failure rate of less than 20% , but nearly 100% success rates have been reported for miniscrews on the palate . The stability and success of orthodontic miniscrews depends on many factors, such as the application area, the angle at which the miniscrew enters the bone, the cortical bone characteristics, the contact degree of the miniscrew with the bone, the soft tissue thickness and mobility, the craniofacial morphology, and the miniscrew’s characteristics . When the effect of soft tissues on the stability of miniscrews is evaluated, the failure risk of miniscrews surrounded by non-keratinized mucosa is found to be greater than that of miniscrews surrounded by keratinized mucosa. Although the soft tissue thicknesses of the palate mucosa differ, it is a reliable site for miniscrews as it consists of keratinized tissues . Because of the different soft tissue thicknesses on the hard palate, miniscrews of the same length will have different levels of bone-screw contact, permanent bone-screw surface area contact, and primary stability contact at various application sites. For this reason, measurements of soft tissue variation should be considered in selecting a miniscrew for the palatal region; it is an important factor affecting the stability of the miniscrew in that region . This study measured palatal soft tissue thicknesses on CBCT images in areas of potential orthodontic miniscrew application in the palatal region to improve the prognosis of orthodontic miniscrews and provide a useful miniscrew selection guide for orthodontists.
This retrospective radiological clinical study protocol was approved by the Ethics Committee of the Antalya Training and Research Hospital. The study obtained 60 CBCT images (30 female, 30 male; age range 19–45; mean age 32 ± 11) from Antalya Bilim University (Bilim Dent) that were taken between September 2019 and February 2020. The images were taken by a Galileos Comfort Plus (Sirona Dental Systems, Germany), and the CBCT imaging parameters were 98 kVp/6 mA with a 0.5 mm slice thickness. Eligibility criteria The inclusion criteria embraced subjects who had bilateral or unilateral dentition from the maxillary canines to the molars. All the subjects selected for the study had a normal craniofacial growth pattern (skeletal Class I malocclusion [0 < ANB°<5] and a normal vertical growth pattern [26 < SN/Go-Gn°<38]), with no orthodontic or prosthetic treatment history and no tooth extraction except for third molars. To eliminate genetic factors, all the subjects were individuals of Turkish origin living in the Mediterranean region. Subjects with a history of palate surgery, periodontal disease, use of a removable or fixed prosthesis or an orthodontic appliance, severe crowding, ectopically positioned teeth, or missing or impacted teeth in the maxilla were excluded from the study. Data collection The points at the cemento-enamel junction (CEJ) of the maxillary canine, first premolar, second premolar, first molar, and second molar were marked in the horizontal plane image of the CBCT and were represented as Ca, Pr1, Pr2, M1, M1-M2, M2 respectively (Fig. A and C). Additional points were marked 3 mm apart for each tooth along the surface of the palatal mucosal, from the CEJs to the surface of the middle palatine suture. Palatal soft tissue thickness was assessed by marking the measurement areas from the soft tissue surface to the hard tissue surface (Fig. B) with three intervals. In addition, in order to determine the palatal mucosa thickness in the midpalatal region, soft tissue thicknesses were measured at 3 mm intervals starting from the incisive papilla along the middle palatal suture. Two orthodontists (BK and MHB) evaluated the reliability of the measurements on 15 randomly selected subjects using a Sidexis 4 (Sirona Dental Systems, Germany) software measurement tool without calibration. To prevent potential bias, the researchers performed their CBCT estimations separately. The interclass correlation coefficient showed a high correlation between the researchers ( r = 0.951, p < 0.000), and all the measurements were performed twice by one researcher (BK) to increase reliability. Palatal soft tissue thickness was recorded to two decimal points (0.01 mm) at each site on the CBCT images. These measurements were repeated 15 days later. The investigator responsible for the data analysis did not participate in the measurement sessions. The CBCT images were visually examined on a laptop PC (Toshiba Europe, Neuss, Germany). Statistical analysis All the numerical data of the various measurement groups were calculated as mean values and standard deviations. Statistical analysis was carried out using SPSS software (Win, ver. 21.0; SPSS Inc., Chicago, IL, USA). The data from measurements at 3 mm intervals for each tooth were examined using one-way ANOVA followed by the Tukey test. Due to the maxilla’s expanding from the Ca region to the M2 region, measurements in the Ca region were up to 15 mm and those in the M2 region were up to 30 mm; p < 0.05 was considered as significant.
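The statistical workflow described above (one-way ANOVA over the measurement points followed by the Tukey post hoc test) can be sketched as follows. The thickness values, group sizes, and group labels below are synthetic placeholders, not data from this study, and the use of SciPy and statsmodels is an implementation assumption.

# Hypothetical sketch of the reported analysis: one-way ANOVA across thickness
# measurements at 3, 6, and 9 mm from the CEJ, followed by Tukey's HSD test.
# All numbers are synthetic placeholders, not measurements from the study.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
mm3 = rng.normal(2.0, 0.4, 30)   # thickness (mm) at the 3 mm point
mm6 = rng.normal(2.6, 0.4, 30)   # thickness (mm) at the 6 mm point
mm9 = rng.normal(3.1, 0.4, 30)   # thickness (mm) at the 9 mm point

f_stat, p_value = stats.f_oneway(mm3, mm6, mm9)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([mm3, mm6, mm9])
labels = ["3 mm"] * 30 + ["6 mm"] * 30 + ["9 mm"] * 30
print(pairwise_tukeyhsd(values, labels, alpha=0.05))   # pairwise comparisons at p < 0.05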
The error in measuring palatal soft tissue thicknesses on CBCT images in the current study was less than that in research using periodontal probing methods, which may be related to the more precise measurements achievable with CBCT. The researchers took measurements at 120 sites on both sides of the maxilla in all subjects in the study. Figure shows the palatal soft tissue thickness in the maxillary Ca region. As can be seen in that figure, there was no statistically significant difference between the points measured from the CEJ to the middle palatal suture in the maxillary Ca region. Pr1 showed greater soft tissue thickness at the 6 and 9 mm points than at 3 mm (Fig. ). Pr1 at 9 mm had the thickest soft tissue of all the measured points (3–21 mm). Pr2 at 3 mm demonstrated thinner mucosa than at the 6, 9, 12, 15, and 18 mm points (Fig. ). Pr2 had the thickest mucosa at 12 mm, and it became thinner again as it approached the middle palatal suture. In the M1 region, no significant difference was observed between the 3 and 6 mm points (Fig. ). The thickness at 9, 12, 15, and 18 mm was significantly greater than at 3 and 6 mm, with the thickest palatal mucosa being at the 12 mm point. The comparison of mucosal thickness between teeth showed significantly greater thickness in the Ca region at the 3 mm point, in the Pr1 region at the 6 mm point, and in the Pr2 region at the 9 and 12 mm points. At the 9 mm point, the Pr1 region exhibited greater thickness than the M1-M2 region, whereas the Pr2 region was thicker than the M1 and M1-M2 regions (Fig. ). At the 12 and 15 mm points, the thickness increased from anterior to posterior: the Pr1 region was thinner than the Pr2, M1, and M2 regions; the Pr2 region was thinner than the M2 region; and the M1 region was thinner than the M2 region (Fig. ). The thickness of the palatal mucosa in the midpalatal region varied between 1.31 and 3.41 mm along the suture, and the differences were not statistically significant (p > 0.05) (Fig. ).
The soft tissue thickness in the area where the miniscrew is placed is an important factor in the success of orthodontic miniscrews, but very few studies have addressed this in the related literature, and most studies have examined bone thickness. The few studies examining soft tissue thickness mostly used periodontal probes and ultrasonography, and the vast majority of them were periodontal studies for graft applications. CBCT was the preferred method in the current study because it is not invasive, unlike a probe, and because it is more suitable for routine use than ultrasonography. The general findings were about 1 mm thicker than previous findings obtained with a periodontal probe or an ultrasonic device. This difference in measurements may have resulted from population differences, changes in measurement areas, or differences in angle and method. In their overview, Wilmes et al. suggested placing miniscrews in the median region for sagittal and vertical tooth movements and for patients with palatally impacted upper canines, and in the paramedian region for rapid maxillary expansion. They also reported that the appropriate location for palatal mini implant placement depends on the biomechanics and appliance design to be used. However, their assessment of the risk of damaging the surrounding anatomical structures was subjective, and they stated that the T-Zone might be more appropriate. In the present study, regardless of the mechanics used, each region was examined in terms of soft tissue to determine whether it is anatomically suitable for palatal mini implant placement. This study aimed to determine soft tissue thickness extensively from the gingival margin to the mid-palatal suture using CBCT in the most common orthodontic miniscrew insertion sites in the palatal region. The research sought to determine the most reliable anatomical location for clinicians in terms of the soft tissue so as to guide the appropriate palatal implant selection for the region. In studies in the literature, soft tissue thickness has been examined based on age, and no significant difference was found. In terms of gender differences, the anterior maxilla is thicker in females and the posterior maxilla in males, but the general characteristics are similar. For this reason, the present study examined a population with an equal number of females and males aged 20–25, and there was no grouping according to age and gender. This study shows that the mucosal thickness in the palatal region increases from anterior (mean: 1.81 mm) to posterior (mean: 3.06 mm). There was no difference between the various points in the Ca region in terms of mucosal thickness. In the Pr1 region at the 9 mm point and in the Pr2 and M2 regions at the 12 mm point, the palatal mucosa thickness was significantly greater. In the posterior region, the mucosal thickness reached its maximum at the 15 mm point. Ueno et al. also examined palatal mucosal thicknesses periodontally with a similar method, and the present results are consistent with theirs. The current study aimed to identify thinner soft tissue and thicker cortical bone areas for orthodontic miniscrew stability, while those researchers aimed to identify thicker mucosal areas that are more suitable for grafting. Becker et al. aimed to evaluate whether specific insertion angles are useful for orthodontic mini implants in the anterior palate.
Although its starting point is similar to that of the present study, that work examined the ideal palatal implant placement areas in terms of bone thickness, whereas the current study evaluated them in terms of palatal mucosa thickness. As a result of their studies, they found the greatest bone thickness and bone fraction values between the first and second premolars in the palatal suture and reported a decrease in the effective bone height in the posterior direction. Effective bone heights reached maximum values slightly anterior and lateral to the first premolars. According to the present results, the mucosal thickness increased from a distance of 6 mm from the gingival border towards the posterior. In addition, a mucosal thickness of approximately 3.5–4 mm was detected in the area that is anatomically safe for implant placement, defined as the T-Zone, especially in the anterior region of the palate. It was found that the mucosal thickness decreased in the midpalatal area towards the posterior part. Researchers reported that optimal bone support extends laterally from the first premolars in the paramedian region and up to the second premolar in the median region. In this study, in parallel with those findings, the palatal mucosa was found to be thinner lateral to the first premolars and up to the second premolars in the median region. Parmar et al. measured the palatal mucosa using ultrasonography but used a single reference point (6 mm). They detected the thickest part of the palatal mucosa at 6 mm between the P2 and the M1 (mean: male 3.17 mm, female 3.1 mm) and the thinnest part in the midpalate (mean: male 0.79 mm, female 0.8 mm). The current study, using a different method, also detected the thinnest palatal mucosa in the midpalate; however, contrary to Parmar et al.'s findings, the thickest palatal mucosa at the 6 mm point was found in the Pr1 region. Differences in some findings may be due to differences in population or method. Kim et al. also measured soft and hard tissue on cadavers in the palatal region. They found that the thickness of the palatal mucosa in the posterior region was significantly higher at the 8–12 mm level in soft tissue measurements, but they found similar mucosal thicknesses in other regions. Although the study of cortical bone thickness in research on cadavers offers important contributions to the literature, it must be remembered that cadavers may not give accurate results in the examination of soft tissues, and this may explain the differences in these results. Orthodontic miniscrews in the palatal region are used for various purposes. Miniscrews placed in the anterior region are preferred for molar distalization and expansion. Palatal implants are used for molar intrusion and eruption of impacted canine teeth in the posterior region. In addition to its contribution to the literature, the present study has clinical implications. Although the quality and amount of cortical bone more significantly affect the stability of orthodontic miniscrews, the thickness of the attached gingiva is an important consideration in placing a miniscrew in the interdental areas. The primary stability of a miniscrew is maximized when it is placed in thinner soft tissue and thicker cortical bone areas. An effective factor in the selection of a miniscrew is the length of its transmucosal collar, which will remain in the soft tissue.
It is recommended that this be as long as possible without affecting the health of adjacent tissues. Because it is important to determine the optimal screw length placed in the bone to increase primary stability, information on the thickness of the soft tissues is crucial. Researchers such as Lee et al. and Nanda and Uribe have suggested using a miniscrew of at least 6 mm in length both to increase anchorage and for more predictable results. In the light of this information and based on these results, the following proposed clinical guide for orthodontic miniscrew selection and stocking will be beneficial in terms of stability: a 3 mm transmucosal collar on screws in the anterior maxilla (Ca and Pr1), a 4 mm transmucosal collar on screws in the posterior maxilla (Pr2, M1, M2), a 5 mm transmucosal collar on screws in the palate (Pr2), and a 1 mm transmucosal collar on screws in the midpalate. While miniscrews with shorter transmucosal collars are preferred in the buccal region, longer-neck screws are preferred in the palatal region, buccal shelf, and infrazygomatic area. Various researchers in the literature have reported that the interseptal area between the P2 and the M1 in the maxilla is the most suitable place for miniscrew placement. In the palatal interdental region, the palatal gingiva is thick and keratinized. It is therefore suitable for a miniscrew, and soft tissue problems are not common. Considering these anatomical and clinical findings, it can be said that orthodontic miniscrews with an average length of 7 mm and a transmucosal collar of 4 mm are more suitable for palatal regions. Soft tissue thickness in the palatal region was examined in a study by Lyu et al., who made measurements in reference to the mid-palatal suture from only four teeth (P1, P2, M1, M2). As a result, they found no significant difference in soft tissue according to age and gender. Although similar methods were used in the current study, their main goal was to determine the appropriate anatomical location for the screws used in miniscrew-assisted rapid palatal expansion (MARPE). The current study aimed to determine the most appropriate locations for miniscrews used in the posterior region (molar intrusion, distalization mechanics, and eruption of impacted canines), as well as for MARPE application in the anterior region. Based on the findings of the study, a guide for orthodontists was prepared (Fig. ). In the schematic figure, the mucosal thicknesses in the palatal areas where orthodontic miniscrews are applied are shown in colors. Green areas, where the soft tissue thickness is less than 3 mm, are the most suitable areas for palatal implants in terms of soft tissue. Red areas, on the other hand, where the soft tissue thickness is more than 4 mm, are risky areas for palatal implants in terms of soft tissue. Many factors should be considered in order to determine the most suitable location for palatal implants; in terms of soft tissue evaluation, the current study will guide clinical applications and future studies.
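As a rough illustration of how the proposed selection guide could be applied, the lookup below encodes the transmucosal collar lengths suggested in the paragraph above; the region keys, the function name, and the idea of expressing the guide as a table are assumptions made purely for illustration.

# Illustrative encoding of the transmucosal-collar guide proposed above.
# Region keys and the dictionary structure are assumptions for illustration only.
SUGGESTED_COLLAR_MM = {
    "anterior_maxilla (Ca, Pr1)": 3,
    "posterior_maxilla (Pr2, M1, M2)": 4,
    "palate (around Pr2)": 5,
    "midpalate": 1,
}

def suggest_collar(region: str) -> int:
    """Return the suggested transmucosal collar length in millimeters."""
    return SUGGESTED_COLLAR_MM[region]

print(suggest_collar("palate (around Pr2)"))  # -> 5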
Soft tissue thickness is an important factor in orthodontic miniscrew stabilization. The mucosal thickness in the palatal region increases from anterior to posterior. In terms of soft tissue, the most suitable place for miniscrew placement is 6 mm from the gingival margin of the teeth. If mini implants are placed in thick soft tissue areas, mini implant failure rates are much higher due to tipping moments . Therefore, clinicians should avoid areas with thick soft tissues for palatal screw placement.
|
An Automatic Lie Detection Model Using EEG Signals Based on the Combination of Type 2 Fuzzy Sets and Deep Graph Convolutional Networks | 8ac9801c-e746-4662-8931-756ed2d856f9 | 11175191 | Forensic Medicine[mh] | In recent decades, truth detection and lie detection tests have attracted the attention of many enthusiasts due to the increase in security threats and crime prevention and control. Many efforts have been made to design effective lie detection systems, and thus, advanced neuroscience-based methods for behavioral research have piqued the interest of scientists and researchers . The most popular technique for detecting confirmation of hidden information is the polygraph. This approach is predicated on the idea that lying can cause various physiological reactions that can be seen and documented with the right equipment. Physiological responses are used in the polygraph to study the body’s involuntary alterations . A polygraph assesses involuntary body changes such as skin conductance, heart rate, blood pressure, and breaths per minute . To determine the subject’s level of honesty, the operator of the polygraph machine compares the measured physiological values to the expected normal levels of physiological signals following the test. However, despite its good performance, the polygraph is untrustworthy because experienced criminals can maintain normal physiological functions while being interrogated by the examiner with a polygraph and deceive both the examiner and the polygraph machine. As a result, the polygraph test results are not legal or valid . However, in the recent decade, technologies beyond the polygraph, such as brain signals or electroencephalogram (EEG), have been created to identify truths and lies accurately. EEG waves can help discriminate between truth and lies. EEG is employed in various medical applications in patients, such as brain–computer interface (BCI) and epilepsy diagnosis . Brain signals are among the human electrical signals. Nerve cells in the brain produce electrical impulses that change in distinct wave patterns regularly . EEG is the recording of electrical activity on the head using electrodes. Electroencephalography can identify lying by analyzing aberrant brain wave variations. These signals are challenging to classify because of their instability and low signal-to-noise ratio (SNR) . After recording the signal, the primary purpose is to interpret, analyze, and transform the waves into a human-readable format for input for various devices. For this purpose, recent years have seen the development of research into the creation of lie detection systems based on EEG, which is discussed below. Abutalebi and colleagues studied the extraction of EEG features in P300-based lie detection. As a result, these researchers developed a novel technique based on specific features and statistical classification. In this study, the researchers used Ag/AgCl electrodes in the Fz (frontal area), Cz (central area), and Pz (parietal lobe) locations of the 10–20 system to record EEG signals at a sampling rate of 256 Hz. The best features in this study were determined as input feature vectors for the classifier using a genetic algorithm (GA). The researchers chose morphological, frequency, and time series features. According to this study, the rate of correct diagnosis based on the two classes of guilty and innocent is as high as 86%. Amir and colleagues investigated lie detection using EEG signal processing during interrogations. 
In this study, frequency bands of brain waves were first extracted. The second step involved extracting morphological features such as amplitude, peak, and delay from existing waves. This study used a standard 10–20 system to record five channels of EEG signals. The study concluded that increasing the number of electrodes in the signal recording yielded more accurate results for distinguishing truth from lies. Mohammad and colleagues investigated how human emotions change while lying using EEG and electrooculography (EOG) signals. This study had ten participants ranging in age from 18 to 28. EEG electrodes were applied to the patient’s scalp using a standard 10–20 system with 32 channels. Furthermore, the sampling rate used to record the signal for each channel was 2000 samples per second. In this study, the delta waves in the supine position had the greatest effect on separation, resulting in a classification accuracy of 67%. Furthermore, the remaining theta, alpha, beta, and gamma waves had maximum accuracy of 52.15%, 55.10%, 79.6%, and 13%, respectively. In this study, the researchers determined that electroencephalography is an accurate and sensitive method for measuring emotional expression while lying. Gao and colleagues surveyed P300-based lie detection techniques. They developed a new method to improve the SNR ratio of the P300 wave, which is used to increase the accuracy of separating lies from truth. In this study, 14 EEG channels from 34 patients were recorded. The P300 wave with a high signal-to-noise ratio was obtained using a new spatial denoising method based on independent component analysis (ICA). This study extracted features in the time domain as well as the frequency domain. This study used the support vector machine (SVM) classifier to classify the feature vector. The maximum accuracy obtained in this study was reported to be 96%. Simbolon and colleagues presented an intelligent system for lie detection based on EEG signals using an SVM classifier. They used Fz, Cz, Pz, O1, and O2 channels to record the signal. The features used in this study were mean, standard deviation, median, maximum, and minimum. The researchers reported a final accuracy of around 70%. Although the classification accuracy was low in this study, it could distinguish between all classes (both false and true). The study’s second advantage is the use of minimal signal-recording electrodes. Saini and colleagues investigated the classification of EEG signals using various features for lie detection. This paper described a novel approach to extracting and integrating domain features with an SVM classifier. EEG data were collected using the international 10–20 electrode placement system, which consisted of channels C3, C4, P3, Pz, P4, O1, O2, and Oz. The Pz channel produced the best results in the analysis of recorded electrodes. This study employed time, frequency, wavelet transform (WT), and empirical mode decomposition (EMD) parameters. Finally, 40 features were extracted from the data and classified with an SVM classifier. The researchers reported a maximum accuracy rate of 98%. Despite the high accuracy in separating the classes, this research has a high computational volume and is not suitable for use in real-time systems. Yohan and colleagues proposed a lie detection system that used EEG signals from SVM, K-nearest neighbor (KNN), artificial neural networks (ANNs), and linear classifiers (LRs). The recorded signal was processed with a fast Fourier transform (FFT) to extract features. 
Among the classifiers tested, the SVM classifier had the highest accuracy (86%) for classifying lie and truth. Bagel and colleagues used deep convolutional networks to distinguish between truth and lies based on EEG data automatically. Their research aimed to develop a deep learning-based model capable of distinguishing truth from lies without relying on the control of emotions or physiological expressions. The proposed model was trained and validated using the DRYAD dataset. In this dataset, 30 people were randomly assigned to the guilty and innocent groups, and the stimulus was evaluated while brain signals were recorded. These researchers proposed a network in which low-level features were extracted in the first layers. Furthermore, their proposed network had varying numbers of neurons and modified rectified linear unit (ReLU), hyperbolic tangent, and sigmoid activation functions. The accuracy reported for classification using the method proposed by these researchers was 84%. Dodia and colleagues suggested an Extreme Learning Machines (ELMs)-based lie detection system using EEG signals. The researchers recorded the EEG signal using 16 Ag/AgCl electrodes. In their study, the recorded signal was first preprocessed to eliminate noise. The signal was then analyzed using algorithms such as the Fourier transform (FT) to extract features. The researchers' study identified features such as mean, variance, maximum, minimum, skewness, elongation, and power. Finally, the feature vector was classified with the ELM classifier. The maximum reported accuracy for the classification proposed by these researchers was 88%. Kang and colleagues created a lie detection system using deep learning. This study employed independent component analysis (ICA) and clustering techniques. In addition, this study used a functional connectivity network (FCN) classifier to classify the lie and truth classes. This study found that lying increases information exchange between the frontal and temporal lobes. The final accuracy reported in this study was 88%. Boddu and colleagues demonstrated a lie detection system based on EEG signals. This study optimized the choice of EEG channels using the particle swarm optimization (PSO) algorithm, and only the PSO-selected channels were used in the study. The proposed approach in this study, which is based on SVM classification, achieved an accuracy of 96%. The classifier's high accuracy was one of the study's advantages; however, one of its limitations was the use of classical feature extraction and selection. A review of previous studies on the automatic detection of truth from lies using EEG signals reveals that, while many studies have been conducted in this field, there are still numerous limitations. These limitations and challenges are examined below: (A) All prior research (apart from a single instance) retrieved the feature vector from the signal using conventional, manual techniques. It has been demonstrated that using manual and conventional approaches requires prior knowledge of the problem. This means that a feature extracted for one problem or subject may not be desirable for another, reducing the classification accuracy. This problem has also been noted in earlier research. Furthermore, manual and conventional feature extraction techniques may increase the computational cost of the training process.
Based on this, it is possible to conclude that manual and traditional feature extraction does not guarantee that the selected/extracted feature is best for the classifier. As a result, the examined techniques, which relied on laborious manual processes and conventional approaches, cannot offer high reliability for automatically separating truth from falsehood. (B) It can be said that the EEG datasets used in previous research are based only on visual stimulation and are not based on questions and answers from the participants. To bring this line of research into practical use, it is necessary to design a more comprehensive database that records the signal based on auditory and speech stimuli so that it can be used in lie detection systems based on EEG signals. The proposed method in this study for automatically distinguishing truth from falsehood is based on feature learning on minimal EEG channels. It combines deep graph convolutional and type 2 fuzzy networks to overcome the challenges above while demonstrating high reliability in practice. The contribution of this study can be summarized as follows: providing an automatic lie detection system based on EEG signals with an accuracy of more than 95%; collecting, for the first time among previous research, a standard database based on sentence questions and answers; providing an automatic algorithm that uses a deep learning approach and type 2 fuzzy networks without needing a feature selection/extraction block diagram; and evaluating the proposed model in noisy environments, where it achieves accuracy above 90% over a wide range of SNRs. The rest of the article is organized as follows: the next section examines the algorithms used in this study; the section after that describes the proposed method, including data registration and architectural design; the following section presents the simulation results and compares the present study with recent algorithms and research; and the final section presents the conclusion. This section begins with a description of the database for a lie detection system. Following that, the mathematical background of graph convolutional networks will be investigated.
2.1. General Model of Generative Adversarial Networks (GANs)
In recent years, GANs have gained significant attention as a vital subfield of deep learning. In 2014, J. Goodfellow and colleagues introduced these networks. In machine learning, GANs handle unsupervised learning tasks. Two models that automatically identify and pick up patterns in the input data are part of these networks. We refer to these two models as the discriminator and the generator. To analyze, record, and reproduce the variations in the dataset, the discriminator and the generator compete with one another. New samples that could plausibly have been drawn from the original dataset can be produced using GANs. The discriminator is trained using fictitious data produced by the generator, and the generator gains the ability to generate usable data. The generator's outputs serve as negative training samples for the discriminator. The generator creates a sample by using a fixed-length random noise vector as input. The generator's primary objective is to deceive the discriminator into labeling its output as real. Real data and fake data produced by the generator are separated by the discriminator. There are two distinct sources of training data for the discriminator: during training, the generator creates fake samples, which the discriminator uses as negative samples, while real data samples are used as positive samples.
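To make the two-player game described above concrete, the short sketch below evaluates the standard GAN value function (formalized in Eq. (1) that follows) for a toy one-dimensional example; the logistic discriminator, the linear generator, and all numeric values are assumptions for illustration only and do not reflect the architecture used in this study.

# Toy illustration of the adversarial game described above: the discriminator
# is rewarded for scoring real samples high and generated samples low, while
# the generator tries to fool it. Purely illustrative, not the study's model.
import numpy as np

rng = np.random.default_rng(1)

def discriminator(x, w, b):
    # a simple logistic "real vs. fake" score in (0, 1); an assumed stand-in
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def generator(z, theta):
    # maps a fixed-length noise vector to a synthetic sample
    return theta[0] * z + theta[1]

real = rng.normal(3.0, 1.0, 512)            # samples from the "data" distribution
noise = rng.normal(0.0, 1.0, 512)           # generator input noise
fake = generator(noise, theta=(0.5, 0.0))   # synthetic samples

w, b = 1.0, -1.5                            # assumed discriminator parameters
value = np.mean(np.log(discriminator(real, w, b))) + \
        np.mean(np.log(1.0 - discriminator(fake, w, b)))
print(f"V(G, D) = {value:.3f}")             # D tries to maximize this, G to minimize it

In practice the two players are neural networks trained alternately by gradient steps on this value; the sketch only evaluates the objective once for fixed parameters.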
In mathematical terms, the following objective is optimized in GAN networks during the training phase, with the generator minimizing and the discriminator maximizing the value function:

(1) $\min_{G}\max_{D} V(G, D) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_{z}(z)}\left[\log\left(1 - D(G(z))\right)\right]$

In the above equation, the discriminator $D$ must be obtained in such a way that it is possible to distinguish real and artificial data from each other, while the generator $G$ is driven to minimize $\log(1 - D(G(z)))$. The equation introduced above cannot be solved in closed form and requires iterative algorithms. Also, to avoid overfitting the data, for every $k$ optimization steps of the discriminator $D$, the generator function $G$ is optimized once.
2.2. General Model of Graph Convolutional Network
In 2016, Michaël Defferrard and colleagues initially put out the fundamental concept of the GCN. These researchers applied signal processing on graphs and graph spectral theory, allowing convolutional functions to be derived and convolutional networks to be used in the setting of graph theory. Particularly significant in graph theory are the adjacency and degree matrices. The adjacency matrix encodes the connections between the vertices of the graph, and the degree matrix can be obtained from it. The degree matrix is diagonal, and its diagonal elements are equal to the sum of the edges connected to the corresponding vertex. The degree matrix can be represented as $D \in \mathbb{R}^{N \times N}$ and the graph (adjacency) matrix as $W \in \mathbb{R}^{N \times N}$, where the $i$-th diagonal element of the degree matrix is defined as follows:

(2) $D_{ii} = \sum_{j} W_{ij}$

The Laplacian matrix can then be defined as

(3) $L = D - W \in \mathbb{R}^{N \times N}$

and it admits the eigendecomposition

(4) $L = U \Lambda U^{T}$

According to the above relations, the Laplacian matrix is the difference between the degree matrix and the adjacency matrix. This matrix is used to calculate the graph basis functions, which can be obtained through the singular value decomposition (SVD) of the Laplacian matrix. In relation (4), $U$ is the matrix of eigenvectors and $\Lambda = \mathrm{diag}([\lambda_{0}, \dots, \lambda_{N-1}])$ is the diagonal matrix of eigenvalues; according to Equation (5), the columns of $U$ are the eigenvectors of the Laplacian matrix, and they serve as the graph Fourier basis:

(5) $U = [u_{0}, \dots, u_{N-1}] \in \mathbb{R}^{N \times N}$

For better understanding, the graph Fourier transform and the inverse Fourier transform of a signal $q \in \mathbb{R}^{N}$ can be defined in relations (6) and (7), respectively:

(6) $\hat{q} = U^{T} q$

(7) $q = U \hat{q} = U U^{T} q$

According to Equation (6), $\hat{q}$ represents the graph Fourier transform of the signal. Also, based on Equation (7), a signal $q$ can be recovered from its graph Fourier transform using the Fourier basis. The graph convolution operator can be calculated by performing the convolution of two signals in the graph spectral domain through the Fourier transform of each signal. For better understanding, the convolution of two signals $z$ and $y$ with the operator $*_{g}$ is defined as the following relationship:

(8) $z *_{g} y = U\left((U^{T} z) \odot (U^{T} y)\right)$

In the above relation, a filter function $g$, in combination with neural networks, describes a graph convolution operator.
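A compact numerical sketch of the spectral pipeline in Eqs. (2) through (8) is given below: it builds the degree and Laplacian matrices of a small graph, computes the graph Fourier transform of a signal, and filters it in the spectral domain. The 4-node example graph, the signal values, and the low-pass filter g are illustrative assumptions, not quantities from the study.

# Minimal spectral graph-convolution sketch following Eqs. (2)-(10):
# build L = D - W, eigendecompose it, transform a graph signal, filter it in
# the spectral domain, and transform back. Graph, signal, and filter are toy choices.
import numpy as np

W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # adjacency matrix of a 4-node cycle
D = np.diag(W.sum(axis=1))                  # degree matrix, D_ii = sum_j W_ij  (Eq. 2)
L = D - W                                   # graph Laplacian                   (Eq. 3)

eigvals, U = np.linalg.eigh(L)              # L = U diag(eigvals) U^T           (Eq. 4)

q = np.array([1.0, -2.0, 0.5, 3.0])         # a signal defined on the graph nodes
q_hat = U.T @ q                             # graph Fourier transform           (Eq. 6)
q_rec = U @ q_hat                           # inverse transform recovers q      (Eq. 7)

g = lambda lam: np.exp(-lam)                # an assumed low-pass spectral filter
y = U @ (g(eigvals) * (U.T @ q))            # y = U g(Lambda) U^T q             (Eq. 10)
print(np.allclose(q, q_rec), np.round(y, 3))

Learnable graph convolutions replace the fixed filter g above with a parameterized function of the eigenvalues that is trained by backpropagation.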
Based on relation (8), $y$ below is the version of $z$ filtered by $g(L)$:

(9) $y = g(L)\,z$

By substituting the Laplacian matrix and its decomposition into eigenvalues and eigenvectors, graph convolution can be written as follows:

(10) $y = g(L)\,z = U g(\Lambda) U^{T} z = U\left(g(\Lambda) \odot (U^{T} z)\right) = U\left(\left(U^{T}(U g(\Lambda))\right) \odot (U^{T} z)\right) = z *_{g} (U g(\Lambda))$

2.3. General Model of Type 2 Fuzzy (TF-2)
Professor Zadeh introduced type 2 fuzzy (TF-2) sets in 1975 as a means of problem-solving and as a development of type 1 fuzzy (TF-1) sets. Membership functions in TF-2 systems themselves have membership degrees, setting them apart from TF-1 systems. TF-2 sets can withstand a wide range of uncertainties, including noise. These systems are helpful in designing control systems and predicting uncertain time series. However, these functions can also be used as activation functions in deep learning networks. As is well known, activation functions in deep learning networks have a significant impact on learning. The activation functions commonly used in deep learning networks include ReLU and Leaky-ReLU. These functions help to address the vanishing gradient problem and improve the performance of deep learning networks. The main weakness of these functions is that the relationships between their input and output are nonlinear. Given the capabilities of TF-2 systems introduced above, in this study these sets have been used instead of the ReLU and Leaky-ReLU activation functions in deep learning networks to deal with various uncertainties, such as the nonlinearity of the relationships between input and output, as well as to mitigate the effect of noise. As stated above, the functions of these sets in deep learning networks can be defined as follows:

(11) $f(\sigma; \gamma) = \begin{cases} P\,\sigma\,k(\sigma), & \text{if } \sigma > 0 \\ N\,\sigma\,k(-\sigma), & \text{if } \sigma \le 0 \end{cases}$

According to the above relationship, $k$ can be defined as follows:

(12) $k(\sigma) = \frac{1}{2}\left[\frac{1}{\alpha + \sigma - \alpha\sigma} + \frac{-1 + \alpha}{-1 + \alpha\sigma}\right]$

Given the mathematical derivatives with respect to the introduced parameters, the parameters $\gamma = [\alpha, P, N]$ can be learned and should be updated at each network iteration. The equation below demonstrates how these parameters are updated:

(13) $\frac{\partial L}{\partial \gamma_{c}} = \sum_{j} \frac{\partial L}{\partial f_{c}(\sigma_{c}^{j})}\,\frac{\partial f_{c}(\sigma_{c}^{j})}{\partial \gamma_{c}}$

In the above equation, $c$, $j$, and $L$ denote the layer index, the observation element, and the objective function, respectively, and $\frac{\partial L}{\partial f_{c}(\sigma_{c}^{j})}$ represents the gradient propagated from the deeper layers. The remaining partial derivatives are:

(14) $\frac{\partial f_{c}(\sigma_{c})}{\partial \alpha_{c}} = \begin{cases} \frac{P_{c}\sigma_{c}}{2}\left(\frac{1}{\alpha_{c}\sigma_{c} - 1} + \frac{\sigma_{c} - 1}{(\alpha_{c} + \sigma_{c} - \alpha_{c}\sigma_{c})^{2}} + \frac{\sigma_{c}(1 - \alpha_{c})}{(\alpha_{c}\sigma_{c} - 1)^{2}}\right), & \text{if } \sigma_{c} > 0 \\ -\frac{N_{c}\sigma_{c}}{2}\left(\frac{1}{\alpha_{c}\sigma_{c} + 1} + \frac{\sigma_{c} + 1}{(\alpha_{c} - \sigma_{c} + \alpha_{c}\sigma_{c})^{2}} + \frac{\sigma_{c}(1 - \alpha_{c})}{(\alpha_{c}\sigma_{c} + 1)^{2}}\right), & \text{if } \sigma_{c} \le 0 \end{cases}$

(15) $\frac{\partial f_{c}(\sigma_{c})}{\partial P_{c}} = \begin{cases} \sigma_{c}\,k_{c}(\sigma_{c}), & \text{if } \sigma_{c} > 0 \\ 0, & \text{if } \sigma_{c} \le 0 \end{cases} \qquad \frac{\partial f_{c}(\sigma_{c})}{\partial N_{c}} = \begin{cases} 0, & \text{if } \sigma_{c} > 0 \\ \sigma_{c}\,k_{c}(-\sigma_{c}), & \text{if } \sigma_{c} \le 0 \end{cases}$

The parameters are then adjusted with the following momentum-based update law:

(16) $\Delta\gamma = \rho\,\Delta\gamma + \xi\,\frac{\partial L}{\partial \gamma}$

where $\rho$ and $\xi$ represent the momentum term and the learning rate, respectively. Compared to the total number of weights in deep learning networks, the number of adjustable, learnable parameters in TF-2 sets is only 3C (where C is the number of hidden layers). This decreases the computational complexity significantly. To address different uncertainties, these sets have been used in this study's graph convolutional networks instead of standard activation functions.
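The sketch below shows how a piecewise activation of the form in Eq. (11), with learnable parameters γ = [α, P, N] and the momentum-style update of Eq. (16), could be wired together in code. The specific expression used for k(σ), the gradient values, and all numeric settings are placeholder assumptions for illustration; they are not the exact definitions used by the authors.

# Illustrative piecewise activation in the spirit of Eq. (11), with learnable
# gamma = [alpha, P, N] updated by the momentum rule of Eq. (16).
# The k(sigma) used here is a simple placeholder, not the paper's exact expression.
import numpy as np

def k(sigma, alpha):
    # placeholder shaping function (assumption); the paper defines its own k(sigma)
    return 1.0 / (1.0 + np.exp(-alpha * sigma))

def tf2_activation(sigma, alpha, P, N):
    # Eq. (11): P*sigma*k(sigma) for sigma > 0, N*sigma*k(-sigma) otherwise
    return np.where(sigma > 0,
                    P * sigma * k(sigma, alpha),
                    N * sigma * k(-sigma, alpha))

gamma = np.array([1.0, 1.0, 0.25])       # [alpha, P, N], assumed initial values
velocity = np.zeros(3)                   # momentum buffer (Delta gamma)
rho, xi = 0.9, 0.01                      # momentum term and learning rate

grad = np.array([0.05, -0.10, 0.02])     # stand-in for dL/d gamma from backprop
velocity = rho * velocity + xi * grad    # Eq. (16)
gamma = gamma - velocity                 # descend along the accumulated update

alpha, P, N = gamma
x = np.linspace(-3.0, 3.0, 7)
print(np.round(tf2_activation(x, alpha, P, N), 4))

Because only three scalars per layer are learned, the extra cost over a fixed activation such as ReLU is negligible, which matches the 3C parameter count mentioned above.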
In 2014, Ian Goodfellow and colleagues introduced these networks. In machine learning, GANs handle unsupervised learning tasks. They consist of two models that automatically identify and learn patterns in the input data: the generator and the discriminator. The two models compete with one another to analyze, capture, and reproduce the variations in the dataset, so that GANs can produce new samples that could plausibly have been drawn from the original dataset. The discriminator is trained using both real data and the fictitious data produced by the generator, and the generator gradually learns to produce usable data; the samples it generates serve as negative training examples for the discriminator. The generator creates a sample from a fixed-length random noise vector, and its primary objective is to deceive the discriminator into classifying its fake output as real. The discriminator, in turn, separates real data from the fake data produced by the generator, and therefore draws on two distinct sources of training data: real samples, used as positive examples, and the fake samples created by the generator during training, used as negative examples. In mathematical terms, GAN training corresponds to the following minimax problem:

(1) \min_G \max_D V(G, D) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

In the above equation, the discriminator D must be obtained in such a way that real and artificial data can be distinguished from each other. The equation cannot be solved in closed form and requires iterative algorithms; in addition, to avoid overfitting, the generator G is optimized once for every k optimization steps of the discriminator D.

In 2016, Michaël Defferrard and colleagues first put forward the fundamental concept of the GCN. These researchers applied signal processing on graphs and graph spectral theory for the first time, allowing convolutional operators to be derived and convolutional networks to be used in the setting of graph theory. The adjacency and degree matrices are particularly significant in graph theory: an adjacency matrix links the vertices of the graph, and the degree matrix can be obtained from it. The degree matrix is diagonal, and its diagonal elements equal the sum of the edge weights connected to the corresponding vertex. Denoting the degree matrix by D ∈ R^{N×N} and the graph (adjacency) matrix by W ∈ R^{N×N}, the i-th diagonal element of the degree matrix is defined as follows:

(2) D_{ii} = \sum_{j} W_{ij}

The Laplacian matrix can then be defined as

(3) L = D - W \in \mathbb{R}^{N \times N}

(4) L = U \Lambda U^T

As these relations show, subtracting the adjacency matrix from the degree matrix forms the Laplacian matrix, which is used to calculate the graph basis functions. These basis functions can be obtained by applying singular value decomposition (SVD) to the Laplacian matrix, so the Laplacian can be written in terms of the matrix of eigenvectors and the diagonal matrix of eigenvalues, as in Equation (4). The columns of the eigenvector matrix U, defined in Equation (5) below, correspond to the eigenvectors of the Laplacian matrix.
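As a concrete illustration of Equations (2)–(4), the short sketch below builds a weighted adjacency matrix for a toy five-node graph, forms the degree and Laplacian matrices, and eigendecomposes the Laplacian. The graph weights and variable names are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Toy symmetric adjacency matrix W for a 5-node graph (weights are illustrative).
W = np.array([
    [0.0, 0.8, 0.0, 0.2, 0.0],
    [0.8, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.7, 0.3],
    [0.2, 0.0, 0.7, 0.0, 0.9],
    [0.0, 0.0, 0.3, 0.9, 0.0],
])

# Degree matrix: D_ii = sum_j W_ij  (Equation (2)).
D = np.diag(W.sum(axis=1))

# Combinatorial graph Laplacian: L = D - W  (Equation (3)).
L = D - W

# Because L is symmetric, its eigendecomposition L = U diag(lam) U^T
# (Equation (4)) yields the graph Fourier basis U and the eigenvalues lam.
lam, U = np.linalg.eigh(L)

print("eigenvalues:", np.round(lam, 3))
# The reconstruction error should be numerically zero.
print("||L - U diag(lam) U^T|| =", np.linalg.norm(L - U @ np.diag(lam) @ U.T))
```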
Fourier transform is also possible based on these eigenvectors: given the diagonal eigenvalue matrix \Lambda = \mathrm{diag}([\lambda_0, \ldots, \lambda_{N-1}]), the Fourier basis can be defined as

(5) U = [u_0, \ldots, u_{N-1}] \in \mathbb{R}^{N \times N}

For a signal q \in \mathbb{R}^N, the graph Fourier transform and the inverse graph Fourier transform are defined in Equations (6) and (7), respectively:

(6) \hat{q} = U^T q

(7) q = U U^T q = U \hat{q}

According to Equation (6), \hat{q} represents the graph Fourier transform of the signal, and Equation (7) recovers the signal q from its Fourier coefficients using the Fourier basis. The graph convolution operator can then be computed from the Fourier transforms of two signals: the convolution of two signals z and y under the operator \ast_g is defined as

(8) z \ast_g y = U\big((U^T z) \odot (U^T y)\big)

In this setting, a filter function g, used in combination with neural networks, describes a graph convolution operator: y is the version of z filtered by g(L),

(9) y = g(L)\, z

By substituting the Laplacian matrix and decomposing it into its eigenvalues and eigenvectors, graph convolution can be written as follows:

(10) y = g(L)\, z = U g(\Lambda) U^T z = U\big(\hat{g} \odot (U^T z)\big) = z \ast_g (U \hat{g}), \qquad \hat{g} = [g(\lambda_0), \ldots, g(\lambda_{N-1})]^T
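Continuing the toy graph above, the snippet below applies Equations (6)–(10): it transforms a graph signal into the spectral domain, attenuates it with a simple low-pass response g(λ), and maps it back to the vertex domain. The filter choice and the signal values are purely illustrative, not those used in the paper.

```python
import numpy as np

# Small symmetric Laplacian (same toy graph as in the previous sketch).
W = np.array([
    [0.0, 0.8, 0.0, 0.2, 0.0],
    [0.8, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.7, 0.3],
    [0.2, 0.0, 0.7, 0.0, 0.9],
    [0.0, 0.0, 0.3, 0.9, 0.0],
])
L = np.diag(W.sum(axis=1)) - W
lam, U = np.linalg.eigh(L)

z = np.array([1.0, -2.0, 0.5, 3.0, -1.0])   # a graph signal, one value per node

# Graph Fourier transform (Eq. (6)) and its inverse (Eq. (7)).
z_hat = U.T @ z
assert np.allclose(U @ z_hat, z)

# A simple low-pass spectral filter response g(lambda) = 1 / (1 + lambda).
g_hat = 1.0 / (1.0 + lam)

# Spectral filtering (Eq. (10)): y = U g(Lambda) U^T z = U (g_hat * z_hat).
y = U @ (g_hat * z_hat)
print("filtered signal:", np.round(y, 3))
```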
2.3. General Model of Type 2 Fuzzy (TF-2)

Professor Zadeh introduced type 2 fuzzy (TF-2) sets in 1975 as a means of problem-solving and as a development of type 1 fuzzy (TF-1) sets. Membership functions in TF-2 systems themselves carry membership degrees, which sets them apart from TF-1 systems. TF-2 sets can withstand a wide range of uncertainties, including noise. These systems are helpful in designing control systems and predicting uncertain time series, but their functions can also serve as activation functions in deep learning networks. As is well known, activation functions have a significant impact on learning in deep networks. The activation functions commonly used in deep learning include ReLU and Leaky-ReLU; they help to mitigate the vanishing-gradient problem and improve network performance, but their main weakness is the fixed, non-adaptive relationship they impose between input and output. Given the abilities of TF-2 systems introduced above, in this study these sets are used instead of the ReLU and Leaky-ReLU activation functions to deal with various uncertainties, such as the nonlinearity of the relationship between input and output, and to counteract the effect of noise. The activation functions derived from these sets can be defined as follows:

(11) f(\sigma; \gamma) = \begin{cases} P\,\sigma\, k(\sigma), & \text{if } \sigma > 0 \\ N\,\sigma\, k(-\sigma), & \text{if } \sigma \le 0 \end{cases}

where k can be defined as

(12) k(\sigma) = \frac{1}{2}\left[\frac{1}{\alpha + \sigma - \alpha\sigma} + \frac{\alpha - 1}{\alpha\sigma - 1}\right]

Given the mathematical derivatives with respect to the introduced parameters, the parameters \gamma = [\alpha, P, N] can be learned; they are updated at every network iteration according to

(13) \frac{\partial L}{\partial \gamma_c} = \sum_j \frac{\partial L}{\partial f_c(\sigma_c^j)} \, \frac{\partial f_c(\sigma_c^j)}{\partial \gamma_c}

In the equation above, c, j, and L denote the layer index, the observation element, and the objective function of the deep network, respectively, and \partial L / \partial f_c(\sigma_c^j) represents the gradient arriving from the deeper layers. The gradient of the activation with respect to \alpha follows from Equations (11) and (12):

(14) \frac{\partial f_c(\sigma_c)}{\partial \alpha_c} = \begin{cases} P_c\,\sigma_c\, \dfrac{\partial k_c(\sigma_c)}{\partial \alpha_c}, & \text{if } \sigma_c > 0 \\ N_c\,\sigma_c\, \dfrac{\partial k_c(-\sigma_c)}{\partial \alpha_c}, & \text{if } \sigma_c \le 0 \end{cases}

and we have:

(15) \frac{\partial f_c(\sigma_c)}{\partial P_c} = \begin{cases} \sigma_c\, k_c(\sigma_c), & \text{if } \sigma_c > 0 \\ 0, & \text{if } \sigma_c \le 0 \end{cases} \qquad \frac{\partial f_c(\sigma_c)}{\partial N_c} = \begin{cases} 0, & \text{if } \sigma_c > 0 \\ \sigma_c\, k_c(-\sigma_c), & \text{if } \sigma_c \le 0 \end{cases}

The parameters are then adjusted with the following momentum-based update law:

(16) \Delta\gamma = \rho\,\Delta\gamma + \xi\,\frac{\partial L}{\partial \gamma}

where \rho and \xi represent the momentum and the learning rate, respectively. Compared to the total number of weights in deep learning networks, the number of adjustable, learnable parameters in TF-2 sets is only 3C (where C is the number of hidden layers), which decreases the computational complexity significantly. To address different uncertainties, these sets have been used in this study's graph convolutional networks instead of standard activation functions.
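To make the activation in Equations (11)–(12) more tangible, the sketch below implements a piecewise parametric activation of this form in NumPy. The expression used for k(σ) follows the reconstruction in Equation (12) above and should be treated as an approximation of the original, and the parameter values (α = 0.5, P = 1, N = 0.1) are illustrative assumptions rather than values reported by the authors.

```python
import numpy as np

def k(sigma, alpha):
    # Auxiliary term of Eq. (12), as reconstructed above (approximate).
    return 0.5 * (1.0 / (alpha + sigma - alpha * sigma)
                  + (alpha - 1.0) / (alpha * sigma - 1.0))

def tf2_activation(sigma, alpha=0.5, P=1.0, N=0.1):
    # Piecewise parametric activation of Eq. (11); alpha, P, N play the role
    # of the learnable per-layer parameters gamma = [alpha, P, N].
    sigma = np.asarray(sigma, dtype=float)
    out = np.empty_like(sigma)
    pos = sigma > 0
    out[pos] = P * sigma[pos] * k(sigma[pos], alpha)
    out[~pos] = N * sigma[~pos] * k(-sigma[~pos], alpha)
    return out

x = np.linspace(-1.5, 1.5, 7)
print(np.round(tf2_activation(x), 3))

# Sanity check of Eq. (15): d f / d P equals sigma * k(sigma) on the positive side.
s, eps = np.array([1.5]), 1e-6
num = (tf2_activation(s, P=1.0 + eps) - tf2_activation(s, P=1.0)) / eps
print(float(num[0]), float(s[0] * k(s[0], 0.5)))
```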
This section outlines the suggested approach for creating an automatic system that detects lies from EEG signals. It covers how the database was recorded, the pre-processing of the data, the designed network architecture, the optimization of the architecture parameters, and how the training and test data were allocated. The study's suggested flowchart is depicted graphically in the corresponding figure: a standard database of EEG signals labeled as truth or lie is collected; the data are then pre-processed using steps such as notch filtering, Butterworth filtering, data augmentation, and normalization; the proposed network architecture, which combines TF-2 sets and graph convolutional networks, is then used for feature selection/extraction and classification; finally, the data are classified into truth and lies.

3.1. Data Collection

In order to collect data, 20 people (10 men and 10 women) of average age (20 to 35 years) with no underlying ailment were requested to take the lie detection test. First, the volunteers were informed that they were participating in the experiment voluntarily and that they had the right to leave at any time if they were dissatisfied with the experimental procedures. The Tabriz University Faculty of Electrical and Computer Science's ethics committee issued the necessary permits for signal recording (IR.Tabriz.1399.2.1). The subjects were asked, two days before the trial, not to consume caffeinated or energy drinks for 48 h; they were also urged to bathe before the test and to avoid applying hair conditioners. EEG signals were recorded with an OpenBCI device according to the 10–20 standard, at a sampling frequency of 500 Hz, using 16 silver chloride channels, and the signals were recorded in bipolar form. Channels A1 and A2 were used as references, with impedance matching set to less than 8 kΩ. After receiving informed consent from the individuals, they were asked to answer questions in two separate scenarios. The questions covered first and last names, father's and mother's names, places of education, birth and domicile, and national identification numbers. In the first scenario, participants were required to answer the questions correctly while EEG data were recorded. After capturing the signal from the first scenario, the subjects were instructed to answer the identical questions incorrectly in the second scenario. After the completion of signal registration, the first and second scenarios were labeled true and false, respectively. Each scenario's signal recording took 30 s, so there were 15,000 samples (30 s × 500 Hz) for each lie and truth class. To avoid EOG noise, participants were asked to close their eyes while answering the questions. An example of the signals recorded in the truth and lie scenarios from the Fz channel is shown in the corresponding figure; there is no significant visual difference between the two labels, which indicates the necessity of designing an automatic lie detection system. One of the individuals during signal recording with the OpenBCI device is also depicted.

3.2. Pre-Processing of EEG Data

As is evident, the data must be cleaned before entering the proposed network, and this subsection describes in detail the pre-processing applied to the recorded database. The pre-processing consists of five steps. In the first step, following previous studies, only the channels Fz, Cz, Pz, O1, and O2 were retained, while the remaining EEG channels were left out; decreasing the number of EEG channels reduces the computational complexity of the algorithm, which improves its efficiency and enables the model to be used in real-time applications. In the second step, a notch filter was used to remove the 50 Hz mains frequency from the data. In the third step, a 2nd-order Butterworth filter with a pass band of 0.05 to 60 Hz was applied to the data to remove the participants' random movements from the recordings. In the fourth step, GAN networks were used to increase the amount of recorded data and train the proposed network more effectively. The GAN trains two subnetworks simultaneously, a generator and a discriminator: the generator produces a 1 × 7500-dimensional signal from a 100-dimensional vector with a uniform distribution; its five 1D-convolutional layers were selected by trial and error, with layer sizes of 512, 1024, 2048, 4096, and 7500, respectively; each layer employs batch normalization, and the activation function is Leaky-ReLU; the learning rate and number of iterations are 0.0001 and 200, respectively. The discriminator accepts a 1 × 7500-dimensional vector as input and decides at the output whether the signal is real or not; it is made up of five dense, fully connected layers. After employing this network, the data dimensions grew from 7,500 to 10,000. In the fifth step, the data were normalized between 0 and 1 to aid network training.

3.3. Graph Design

A proximity (adjacency) matrix is generated after determining the functional connectivity of the EEG channels. This is accomplished by evaluating the correlation between the channels and expressing the result as an EEG channel connectivity matrix. A threshold is then applied to obtain a sparse approximation of the connectivity matrix, from which the network adjacency matrix is derived. The resulting graph is fed into the suggested model, which selects/extracts features and classifies them.
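The filtering and graph-construction steps described above can be sketched as follows. The notch frequency, band-pass limits, and filter order follow the text, while the synthetic signals, the Pearson correlation measure, and the 0.3 sparsity threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

fs = 500.0                                    # sampling frequency (Hz), as in the text
rng = np.random.default_rng(0)
eeg = rng.standard_normal((5, 15000))         # 5 retained channels x 30 s of synthetic data

# Step 2: notch filter at 50 Hz to suppress mains interference.
b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=fs)
eeg = filtfilt(b_notch, a_notch, eeg, axis=1)

# Step 3: 2nd-order Butterworth band-pass filter, 0.05-60 Hz.
b_bp, a_bp = butter(N=2, Wn=[0.05, 60.0], btype="bandpass", fs=fs)
eeg = filtfilt(b_bp, a_bp, eeg, axis=1)

# Step 5: normalize each channel to the [0, 1] range.
eeg = (eeg - eeg.min(axis=1, keepdims=True)) / np.ptp(eeg, axis=1, keepdims=True)

# Graph design: channel-by-channel correlation, thresholded into a sparse
# adjacency matrix W (the 0.3 threshold is an illustrative choice).
corr = np.corrcoef(eeg)
W = np.where(np.abs(corr) > 0.3, np.abs(corr), 0.0)
np.fill_diagonal(W, 0.0)
print(np.round(W, 2))
```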
3.4. Customized Architecture

This subsection presents the customized network architecture for automatic lie detection. After a dropout layer, the input is passed to six graph convolutional layers activated by TF-2 functions; the graph convolutional layers extract the dynamic information contained in the EEG signals. After batch normalization, the data are activated again with the TF-2 function, and a further dropout layer is added to prevent overfitting. Finally, the output of a flattening layer is divided into the two classes of truth and falsehood using the fully connected layer and the Softmax activation. The described design is illustrated graphically in the corresponding figure. In the customized design based on graph convolution, the number of graph nodes equals the number of channels considered; thus, in the first convolution layer, each vertex receives 10,000 samples. The coefficients S1, S2, S3, S4, S5, and S6 represent the Chebyshev polynomial expansion of each layer and differ between layers. The dimensionality reduction across the layers of the proposed network is shown in the corresponding table.

3.5. Training, Validation, and Test Series

The appropriate architecture for the proposed network was determined by trial and error; the selected parameters, such as the number of layers, layer types, optimization algorithm, and filters, are listed in the corresponding table. The data were allocated randomly to the training, validation, and test sets, with proportions of 70%, 20%, and 10%, respectively.
This part presents the outcomes of the suggested model. The proposed architecture was implemented in the Python programming language, and the data preparation simulations were carried out in the MATLAB 2019a environment. The findings were produced on the Google Colab 2024 Premium edition with a GPU t60 and 64 GB of RAM. This research evaluated the results using standard criteria such as accuracy, precision, sensitivity, and specificity, defined as follows:

(17) \mathrm{accuracy} = \frac{TP + TN}{TP + TN + FP + FN}

(18) \mathrm{precision} = \frac{TP}{TP + FP}

(19) \mathrm{sensitivity} = \frac{TP}{TP + FN}

(20) \mathrm{specificity} = \frac{TN}{TN + FP}

In these relationships, TP, TN, FN, and FP represent the numbers of true positives, true negatives, false negatives, and false positives, respectively. This section has three subsections. The first subsection displays the optimization findings for the network architecture, to demonstrate visually that the architecture considered for the current application is appropriate. The second subsection shows the outcomes of the suggested model for automatic lie detection. The third and last subsection compares the results, one by one, with contemporary algorithms and studies.
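As a small worked example of Equations (17)–(20), the snippet below computes the four criteria from an illustrative confusion matrix; the counts are invented for demonstration and are not the paper's results.

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, sensitivity and specificity as in Eqs. (17)-(20)."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "precision":   tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Illustrative counts only: 55 lie trials and 55 truth trials, one error in each class.
print(classification_metrics(tp=54, tn=54, fp=1, fn=1))
```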
4.1. Architecture Optimization Results

This subsection presents the outcomes of optimizing the suggested network. The corresponding chart demonstrates that selecting six graph convolutional layers is an appropriate compromise between computation and performance: adding more than six layers increases the computational burden while network accuracy remains nearly stable. Furthermore, several settings of the polynomial coefficients were considered when designing the suggested architecture; the outcomes show that the network performs best when the coefficients S1–S5 = 1 are taken into account.

4.2. Results of Simulation

The accuracy and error of the proposed network for automatic lie detection are reported for the fuzzy sets (proposed model) and for the ReLU and Leaky-ReLU activation functions. As previously stated, 200 iterations were considered for the proposed network, with stability reached from iteration 192 onward, by which point the network error had decreased to its final level; this illustrates the significance of adopting TF-2 sets. Several evaluation criteria for distinguishing lies from truth were computed, including accuracy, precision, sensitivity, specificity, and the kappa coefficient; all of the obtained values exceed 95%. The confusion matrix and the receiver operating characteristic (ROC) curve were also analyzed: the confusion matrix indicates that only two samples are incorrectly recognized by the suggested model, demonstrating its excellent performance, and the ROC curve lies close to the upper-left corner, with values between 0.9 and 1. A t-SNE plot of the raw EEG data and of the fully connected (FC) layer was also produced. In the raw data, the examples from the two classes, truth and lie, are intermingled; after passing through the proposed network, the samples are clearly separated into true and false classes in the final (fully connected) layer, indicating that the network is highly effective at classifying the two classes of truth and lies. As is well known, EEG signals have a low SNR, and random movements of participants, such as blinking, can impair classification accuracy, so the model used should have strong noise resistance. This study combined graph convolutional networks with TF-2 sets to prevent a drop in classification accuracy due to noise. Gaussian white noise with a normal distribution was injected into the data at various SNR levels to demonstrate the model's efficiency; the performance of TF-2 (the proposed model) proved more resistant to external noise than the ReLU and Leaky-ReLU activation functions.

4.3. Comparison with Previous Algorithms and Studies

This subsection compares the proposed model's performance, one by one, with other recent research. Existing studies and their methods were compared with the proposed model, and the proposed technique outperforms these recent investigations: the accuracy of the proposed model is 98.2%, whereas the highest values reported by the referenced studies are 96% and 96.45%, respectively. The highest accuracy achieved among the compared studies, around 98%, relies on manual feature selection/extraction and classification; as mentioned in the Introduction, manual methods are unsuitable for real-time applications because of their computational complexity. Moreover, none of the prior research employed a reference database for classification, so a one-on-one comparison with these studies appears unfair. We therefore also simulated recently developed conventional methods on our registered database and compared the results with our model: pre-trained AlexNet, ResNet60, and InceptionV3 networks were compared with the proposed model. The proposed algorithm converged to the ideal value faster and achieved the highest accuracy among the compared networks.

Despite its promising results, this research, like earlier ones, has limitations. This work used GAN networks to augment the data and prevent the model from overfitting during training; the size of the recorded database could be increased in the future, eliminating the need to add data artificially. In addition, wet electrodes were used to record the signals in this work, and the performance of dry electrodes could be explored in future investigations.
This study presents a fully automatic model for detecting truth from lies using EEG signals. The proposed model is based on the combination of TF-2 sets and graph convolutional networks and is end-to-end, eliminating the need for a feature selection/extraction block. A standard database of EEG signals from 20 subjects was collected, and the classification findings revealed that the suggested model reaches a high accuracy of 98%, which is quite promising compared with previous studies. This promising performance allows the suggested model to be applied in various lie detection applications. In future research, we intend to use the proposed algorithm as a real-time model for lie detection using a minimal number of EEG channels.
Impact of a transformative health literacy model for Thai older adults with hypertension

Thai society is moving continuously towards an aging society due to its rapidly changing population structure. Thai people are living longer and mortality rates are decreasing thanks to efficient medical technology systems. As a result, Thailand's future population will contain a predominant group of elderly people, with a tendency to become a fully aged society and ultimately a super-aged society. Furthermore, the physical decline and reduced bodily function that come with old age mean that the majority of elderly individuals suffer from diseases of the circulatory system, particularly hypertension, which may be linked to stroke and ultimately result in patients becoming bedridden and unable to care for themselves. Chronic diseases have a significant global impact, with a particular emphasis on hypertension: this condition affects more than one billion individuals worldwide, including a notable burden of 10% in Surat Thani province, Thailand. If left untreated, hypertension can result in severe consequences such as heart disease, stroke, kidney damage, vision impairment, and an increased risk of aneurysm. Hypertension-related fatalities amount to 25.3 deaths per 100,000 people, contributing to 9.4 million deaths, half of them from strokes. Among the elderly population in particular, the prevalence of high blood pressure reaches up to 5 percent; if it cannot be controlled, it may lead to stroke and become a burden for the family. It is therefore imperative to establish interventions aimed both at preventing the onset of disease and at mitigating its impact on health so that it does not become more severe.

Health literacy is vital in healthcare, encompassing cognitive abilities and skills. It involves acquiring, understanding, evaluating, and applying health information, enabling informed decisions. Health literacy empowers patients to navigate medical complexities, access healthcare resources, and manage their health actively, fostering empowerment. This is particularly relevant for patients with chronic conditions, such as those afflicted with hypertension. Some studies have found that good health literacy in individuals with high blood pressure may help prevent the onset of disease or reduce the severity of disease that has already occurred. Health literacy skills can create awareness of health and improve self-care behaviors to maintain good health in hypertension patients. Transformative learning offers an alternative educational paradigm that can lead to cognitive and behavioral shifts. Grounded in self-awareness and critical introspection, this approach facilitates positive behavioral changes by challenging existing beliefs, promoting openness, and embracing new perspectives, contributing to improved hypertension management. Furthermore, social cognitive learning theory asserts that individuals acquire knowledge and skills by engaging in observational learning, replicating observed behaviors, and receiving feedback on their own actions. According to this theory, individuals have the potential to improve their health literacy by observing and emulating the health behaviors of other individuals.
Therefore, this research employed a learning process based on transformative learning theory and social cognitive learning theory to enhance health literacy, aiming to facilitate a transformative change in the health literacy levels of individuals with high blood pressure. This is anticipated to contribute to preventing, or mitigating the severity of, the consequences associated with high blood pressure. Previous research has identified a lack of research support for the theoretical foundations of health literacy programs in individuals with high blood pressure. Consequently, this research aims to fill these knowledge gaps by developing a health literacy program based on the transformative learning model and social cognitive learning theory to develop health literacy among Thai older adult patients with hypertension.

Conceptual framework

The study's framework encompasses social cognitive learning theory and the transformative learning concept. The transformative learning model emphasizes the importance of critical thinking and reflection, allowing individuals to challenge their existing beliefs and perspectives. Social cognitive learning theory posits that people learn through observing others, imitating their behaviors, and receiving feedback on their own actions; it suggests that individuals can enhance health literacy by observing and modeling the health behaviors of others who are knowledgeable and skilled in managing health effectively. By incorporating social cognitive learning theory and the transformative learning model into health literacy interventions, individuals may develop the skills necessary to make informed decisions about their health. The framework is illustrated in Fig. .
This study employed an experimental design. All procedures adhered to guidelines, and informed consent was obtained from participants and/or their guardians.

Experiment and number of replicates

The research was conducted within the community of Surat Thani province. The number of replicates consisted of 36 Thai hypertensive older adult patients over 60 years of age. Inclusion criteria were at least ten years of experience with hypertension, good communication, being a Thai older adult hypertension patient living in the community, and having the motivation to attend the study. Exclusion criteria were withdrawal upon request or declining consent to participate in the research.

Study tools

The research instruments comprised (1) the transformative health literacy model and (2) the health literacy in hypertension scale. In this research, the transformative health literacy model is the intervention provided to the sample group, and the health literacy scale is used to assess the effectiveness of this model.

The transformative health literacy model

The transformative health literacy model emerged from an extensive review of the literature on transformative learning and social cognitive learning theory. The model draws on Bandura's social cognitive theory, highlighting observational learning, self-efficacy beliefs, and the interplay of personal factors, environment, and behavior, and it underscores the transformative nature of health-related knowledge acquisition, promoting informed choices for optimal health. The model followed a structured approach involving four steps, each spanning 4 h for a total of 16 h: (a) teaching hypertension disease knowledge, (b) communication and sharing of experiences, (c) analysis and discussion of hypertension information, and (d) decision-making for behavior changes.

The health literacy in hypertension scale

The health literacy in hypertension scale, used for data collection as the primary outcome, was developed from the literature and research on health literacy and hypertension. It included 27 items measured on a 5-point Likert scale. Content validation involved three health literacy experts and two hypertension experts, and each item had an index of item-objective congruence (IOC) value higher than 0.05. A pilot test with 30 separate Thai hypertension patients yielded a Cronbach's alpha of 0.89 for reliability. The criteria for interpreting the health literacy in hypertension scale are divided into five levels: an average score of 5.00–4.21 is considered the highest level, 4.20–3.41 the high level, 3.40–2.61 the moderate level, 2.60–1.81 the low level, and 1.80–1.00 the lowest level.

Data collection

Prior to the transformative health literacy model, the experimental group's health literacy in hypertension was measured as pretest scores. During the experimental phase, the experimental group underwent the model, and scores were collected after the experiment. A follow-up measurement occurred 2 months later using the same instrument, based on previous research evidence suggesting that the use of transformative learning contributes to cognitive changes in research participants lasting for 2 months.

Data analysis

This study employed dependent (paired) t-test statistical analysis using statistical software.
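The dependent (paired) t-test described above compares each participant's scores across phases. A minimal sketch of such an analysis is shown below; the score vectors are invented for illustration and do not correspond to the study's data.

```python
from scipy import stats

# Hypothetical pre-test and post-test health literacy scores for the same
# participants (illustrative values only, not the study's data).
pre  = [3.1, 2.8, 3.4, 2.9, 3.0, 3.3, 2.7, 3.2]
post = [4.0, 3.9, 4.3, 3.8, 4.1, 4.2, 3.7, 4.0]

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```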
The post-test and follow-up mean scores of health literacy related to hypertension in the experimental group were significantly higher than the pre-test mean scores at the 0.05 significance level, as shown in Table .
The study's results shed light on the effects of the transformative health literacy model on health literacy related to hypertension. Notably, participants' health literacy related to hypertension improved significantly at the post-test and follow-up phases of the experiment, reinforcing the model's efficacy. The model was developed by merging social cognitive learning theory and transformative learning, with the aim of fostering learning, self-awareness, and behavioral change. The transformative learning model, exemplified in this context, involved the evolution of perspectives through reflection and exposure to new experiences, such as video clips and shared experiences among group members facing similar challenges. The video clip covered information on hypertension, its complications, and how to manage oneself while suffering from hypertension. After the video clip was played, a group activity followed in which hypertension patients engaged in discussions and shared their feelings; this used group processes to raise awareness among patients, encourage them to change their health behaviors, and develop healthier habits. In particular, the acquisition of knowledge about hypertension facilitated by the group leader served as a basis for transformative experiences, supporting the development of well-informed choices. This model therefore contributed to hypertension patients gaining accurate knowledge about self-care methods, developing awareness of health adaptations, and fostering the need to improve their health, which could be linked to the acquisition of self-care skills and the cultivation of improved health behaviors. Moreover, the effectiveness of Bandura's theory in the realm of health literacy is underscored: this theory emphasizes how observational learning, self-efficacy, and environmental factors collectively contribute to the acquisition of health knowledge and to the decision-making process. This integrated perspective highlights the synergies between transformative learning, health literacy, and Bandura's theoretical framework, reinforcing their collective role in fostering informed health choices and outcomes. In summary, the model can improve health literacy related to hypertension among Thai older adult hypertension patients, and healthcare professionals can use it to aid prevention and reduce the severity of hypertension-related problems in healthcare settings.

Limitations

This study used data obtained only from Thai hypertension patients in Surat Thani province, in the southern region of Thailand. Additionally, the use of older adult participants may be a further limitation in generalizing the research findings to the entire population of hypertensive patients.
This study aimed to investigate the effects of a transformative health literacy model on Thai older adult patients with hypertension. The results indicated a significant improvement in health literacy related to hypertension among the experimental group. This research contributes to the existing knowledge base because it addresses a gap in health literacy programs for individuals with high blood pressure, offering theoretical foundations grounded in transformative learning and social cognitive learning theory.
Contribution of simulation to learning the fundus examination (Apport de la simulation dans l'apprentissage de l'examen du fond d'œil)

Fundus examination is an essential part of the clinical examination in ophthalmology, allowing ophthalmologists, neurologists, and family physicians to diagnose numerous pathologies. It has traditionally been learned through direct practice on patients. This training principle is now confronted with the evolution of our society, and patients are less and less willing to be examined by practitioners in training. Simulation is probably one of the greatest advances in medical education: in particular, it makes it possible to put theory into practice and to learn from one's mistakes without immediate consequences for patients. A true complement to conventional training in ophthalmology, simulation can be integrated into the curriculum of medical students, interns, and residents in training. The objective of this work was to evaluate the impact of procedural simulation training sessions on mastery of the fundus examination and on the acquisition of theoretical knowledge in ophthalmology.
Study population

This was a prospective study including students who took part in procedural simulation training sessions during their ophthalmology rotation.

Conduct of the simulation session

A single ophthalmology simulation session was organized for each group of students during the rotation period at the "CeSim" simulation center, in a room reproducing an ophthalmology consultation room with all the equipment required for the scenario. All participants gave informed oral consent to take part in the study. The scenario planning documents were written in advance. The four themes studied were diabetic retinopathy, hypertensive retinopathy, papilledema, and central retinal vein occlusion. Each simulation session took place in three stages. The first stage began with welcoming the learners and always continued with a pre-test, in the form of 5 multiple-choice questions (MCQs), to assess the learners' knowledge and prerequisites. The second stage was the training itself, which comprised three parts: a demonstration phase (20 minutes), during which the instructor presented the model and performed a first demonstration of the fundus examination on the model, detailing the technique, its different steps, and the equipment needed to perform it (the simulator was mounted in a head designed with adjustable pupils and removable 35 mm photographs placed inside the eye; these photographs simulated the retina and could be viewed with a standard hand-held ophthalmoscope); a 15-minute hands-on phase, during which the learners themselves carried out the different steps of the fundus examination; and a debriefing phase immediately after the end of the hands-on phase, conducted collectively and led by the instructor, whose objective was to bring the learners to provide constructive feedback through its different stages: description, analysis, and synthesis. During the third stage, the learners were assessed with a post-test comprising the same MCQs as the pre-test. A score out of 10 was calculated for the pre-test and for the post-test. Specific performance in the fundus examination was assessed by the instructor during the hands-on phase using a performance score inspired by the example published by the International Council of Ophthalmology (ICO). This score comprises 8 items, each rated from zero to one according to how appropriately the maneuver was performed, giving a maximum of 8 points. Finally, a satisfaction questionnaire was distributed to assess the organization, scientific interest, educational value, and overall appreciation of the simulation session; each item was rated from 1 to 4, with a possible maximum of 16. Results were expressed as the median and interquartile range for quantitative variables and as counts and percentages for qualitative variables. The Wilcoxon test was used to compare scores before and after the training in the student group. Results were considered significant if p ≤ 0.05.
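For illustration, the pre/post comparison and the correlation analyses reported in the results can be reproduced with standard statistical routines, as sketched below; the scores are invented for demonstration only and are not the study's data.

```python
from scipy import stats

# Invented pre-test and post-test scores out of 10 for the same learners
# (illustrative only; not the study's data).
pre  = [5, 4, 6, 5, 3, 5, 6, 4, 5, 7, 4, 5]
post = [9, 8, 10, 9, 7, 9, 10, 8, 9, 10, 8, 9]

w_stat, p_value = stats.wilcoxon(post, pre)
print(f"Wilcoxon W = {w_stat}, p = {p_value:.4f}")

# Spearman correlation between the initial score and the gain (delta-test),
# as used in the results to relate baseline knowledge to improvement.
delta = [b - a for a, b in zip(pre, post)]
rho, p_rho = stats.spearmanr(pre, delta)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```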
In total, 48 DCEM2 medical students, divided into 4 groups of 12, attended 4 simulation training sessions. Twenty-nine were female (60%) and 19 were male (40%), giving a sex ratio of 0.65. All learners had previously attended a simulation session in another discipline (100%). The initial assessment of the learners' theoretical knowledge revealed a median overall pre-test score of 5/10 (4/10–6/10); none of the learners obtained a full score. The post-training assessment showed a median overall post-test score of 9/10 (8/10–10/10), and twenty-three learners obtained a full score. Regarding the contribution of this training to the acquisition of theoretical knowledge, post-test scores improved overall compared with the pre-test, with a median overall delta-test, reflecting the gain obtained, of +4.00 (+2.00 to +6.00). We noted a significant negative correlation between the pre-test and the overall delta-test, with R = −0.82 and p < 0.0001 according to the Spearman correlation test; this means that the lower the initial score, the greater the impact on knowledge improvement. All learners (100%) improved their theoretical knowledge following this training. The specific performance scores for carrying out the fundus examination were assessed for all 48 learners; the median overall specific performance score was 5.5/8 (5/8 to 7/8). There was no correlation between the overall specific performance score and the post-test results (R = −0.07 according to the Spearman correlation test); in other words, the performance achieved was independent of the knowledge acquired at the end of the training. All learners completed the satisfaction questionnaire at the end of the training session, and no item was judged unsatisfactory by any learner. Learners rated the organization of the workshops, their scientific interest, their educational value, and their overall appreciation as excellent in 69%, 52%, 56%, and 53% of cases, respectively. We noted a significant positive correlation between knowledge improvement and learner satisfaction, with R = 0.80 and p < 0.0001; that is, the more a learner improved his or her knowledge during the training session, the more satisfied he or she was with the session.
In our prospective study, we showed that procedural simulation training workshops had a positive impact on the improvement of theoretical knowledge as well as on the acquisition of skills specific to performing the fundus examination. A total of forty-eight learners participated in the 4 simulation workshops organized. The objective evaluation of the effect of the simulation session on the acquisition of theoretical knowledge in ophthalmology showed a positive impact, as evidenced by the significant increase in the overall median score from 5.00/10 to 9.00/10 (p = 0.031), with an overall median delta-test of +4.00. We also showed that this simulation training had real value for the acquisition of technical skills, with an overall median score for performance specific to the fundus examination of 5.5/8 (5/8 to 7/8). The learners' perception was captured by the satisfaction survey at the end of the session, which showed that the majority of learners were satisfied overall. The fundus examination is an important part of the ophthalmologic examination, allowing macroscopic analysis of the retina in particular. It is very often useful for assessing the repercussions of certain common systemic conditions. There is broad consensus that all medical students and physicians should have a minimum level of proficiency in direct ophthalmoscopy. For more than a century, learning ophthalmology through apprenticeship has rested on the dogma "see one, do one, teach one": the trainee learns alongside a senior, progressively taking on responsibility for performing procedures while supervision is gradually reduced. Simulation is a true complement to classical training in ophthalmology and is used in several types of educational programs for students, medical interns, and residents in training. Chung and Watzke developed the first simulator for learning the fundus examination in 2004. Although the initial results were very encouraging, this model had certain drawbacks, notably the need for a very high lens power in the ophthalmoscope to visualize the fundus images and the constraint of keeping the box in a standard orientation for the fundus images to remain anatomically correct. A second prototype, named THELMA (The Human Eye Learning Model Assistant), was developed by Pao in 2007; it consisted of a mannequin head with artificial eyeballs reproducing a visual axis close to that of the human eye. More recent models have since been created, notably the Eye Retinopathy Trainer® (developed by Adam, Rouilly Co., Sittingbourne, UK), a life-size mannequin head with adjustable pupils that provides access to a larger, high-quality retina through a handheld ophthalmoscope. This is the same model we used in the simulation sessions. Whatever its method, timing, or purpose, assessment has an essential place in the simulation-based learning process. It makes it possible to confirm whether the educational objectives have been achieved fully, partially, or not at all.
L'évaluation initiale démontre le besoin de formation chez nos apprenants, puisque les résultats de l'évaluation initiale étaient insuffisants avec un score global médian au pré-test de 5/10. Ceci montre que malgré le fait qu’ils ont eu leurs cours magistraux, leurs connaissances présentent beaucoup de lacunes. En effet, leur formation non théorique doit être renforcée. L'impact de notre formation par simulation sur les connaissances des apprenants a été intéressant, puisque l'évaluation globale des apprenants après la fin de la formation a montré un score médian global post-test de 9/10. En effet, une amélioration significative des scores obtenus avec p= 0.031 et un delta-test global médian de +4.00 ont été constatés. L’utilisation d’un test recourant à des questions identiques avant (pré-test) et après (post-test) une séance de formation permet d’apprécier un gain cognitif immédiat, mais ne permet pas de préjuger de la transférabilité des apprentissages évalués . Dans une expérience similaire, Swanson et al ont montré que les réponses correctes des apprenants avant et après le test sont passées d'une moyenne de 47 % à 86 %, et cette amélioration était également significative (p = 0,001) . Nous avons trouvé une corrélation négative significative entre le score initial au pré-test et l'amélioration de ce score au post-test, dans le sens où plus le score initial est faible, plus l'impact sur l'amélioration des connaissances est important. Ceci implique que cette formation est beaucoup plus bénéfique pour les apprenants débutants que pour les confirmés, et devrait donc être adressée à ceux qui sont au début de leur formation. Quant à la performance spécifique à la réalisation du fond d'œil, le score global médian était de 5,5/8 et la plupart des apprenants ont eu la moyenne. Nous ne pouvons pas conclure quant à l'apprentissage acquis, puisque cette performance spécifique n'a pas été évaluée avant la formation. En effet, si les apprenants n'avaient aucune compétence dans la réalisation du fond d'œil, le fait que la plupart d'entre eux aient obtenu la moyenne après la démonstration réalisée en début de session par le formateur est déjà très intéressant et bénéfique. Dans une étude similaire menée avec le même simulateur (Eye Retinopathy Trainer®), Androwiki a réussi à démontrer un impact positif de la simulation sur la performance des apprenants dans la réalisation de l'examen du fond d'œil. Ces résultats démontrent que la simulation améliore non seulement les connaissances théoriques des apprenants, mais aussi leurs compétences techniques. Bien que les résultats soient très prometteurs au vu de la littérature actuelle , , il reste encore beaucoup de travail à faire pour tester la validité et la fiabilité de cet outil. Les apprenants ont qualifié la valeur pédagogique des ateliers d'excellente dans 56% des cas, ce qui est cohérent avec les résultats qui montrent que tous les apprenants ont acquis des compétences et amélioré leurs connaissances. La valeur pédagogique de ce type de formation, même de façon ponctuelle, est largement démontrée. Cependant, il serait certainement plus bénéfique que ces formations puissent être renouvelées afin de maintenir un niveau optimal de connaissances et de compétences . Cette étude présente certaines limites : le nombre de participants était relativement faible et l'évaluation de la formation n'a été réalisée qu'à court terme. 
Nevertheless, these preliminary results could provide baseline data to support the development of new projects involving other types of simulation tools in ophthalmology.
Procedural simulation retains its full place despite all technological advances and still has the advantage of being less expensive and more accessible. In our prospective study, we showed that procedural simulation training workshops had a positive impact on the improvement of theoretical knowledge in ophthalmology as well as on the acquisition of skills specific to performing the fundus examination. It is an essential means of preserving patient safety by limiting the risk of errors, and its integration into the curriculum, for clinical clerks as well as for interns and residents of the specialty, should be considered.
Fish hook technique for nucleus management in manual small-incision cataract surgery: An Overview

The fish hook technique was conceptualized, developed, and adopted in routine surgical practice for nucleus management in MSICS at Sagarmatha Chaudhary Eye Hospital Lahan, Nepal (SCEH), by Dr. Albrecht Hennig and colleagues around 1997. SCEH performs around 50,000 cataract surgeries per year, a majority of them MSICS with the fish hook technique. Prolapsing the nucleus into the anterior chamber with gradual clockwise or anticlockwise nudges on the equatorial plane of the lens nucleus, using a Sinskey hook or similar device, is the critical step in all other MSICS techniques. The fish hook technique of MSICS bypasses this step.

Preparation of fish hook
The fish hook, a unique nucleus management tool, is prepared by bending a 30-gauge needle. It has double angulations, a terminal backward bend and a lateral bend in the middle of the shaft. It occupies very little volume in the anterior chamber during nucleus extraction.

Capsular opening in fish hook technique of MSICS
The fish hook technique was initially developed for high-volume cataract surgery. Linear capsulotomy using a keratome knife was the usual practice. The advantage of linear capsulotomy was the ease with which the superior pole of the nucleus could be prolapsed. Between the popped-out nuclear pole and the posterior capsule, viscoelastics would be used to create a safe plane for insertion of the fish hook. The nucleus is ideally engaged at the junction of the lower one-third and upper two-thirds before being pulled out of the bag and gradually out of the tunnel. After IOL insertion, the large remnant of the anterior capsular flap would be fashioned into an adequately sized capsular opening. However, because the anterior capsular remnant is asymmetric in different zones of the capsular bag, some possibility of IOL decentration remains later on with capsular fibrosis. This can effectively be avoided by performing a continuous curvilinear capsulorhexis (CCC) of adequate size in proportion to the nucleus, to facilitate its delivery without putting much pressure on the capsular bag.

Nucleus delivery
Initially, the superior pole of the nucleus is prolapsed from the capsular bag. Once the superior pole of the nucleus can be seen popping out of the capsular bag, a safe plane is created between the convex posterior surface of the nucleus and the concavity of the capsular bag with attached epinucleus and cortex by injecting viscoelastics. The fish hook is carefully introduced into this plane. Near the inferior pole of the nucleus, the fish hook is slightly rotated upward for effective engagement of the nucleus tip. Docking or hooking of the nucleus in the fish hook is necessary for effective transmission of the pulling or delivery force and safe nucleus extraction. The safe plane should be recreated by injecting viscoelastics if the anterior chamber shallows. Once the nucleus is hooked, it is glided out of the sclero-corneal tunnel using slight pressure on the posterior lip of the tunnel with the fish hook. As with any force, the position at which the nucleus and fish hook dock determines the resulting force vectors. The ideal site for engagement of the nucleus tip and hook is inferior to the horizontal meridian bisecting the nucleus and in line with the vertical meridian.
If it is not in line with the vertical meridian, a torque vector may arise that imparts a rotatory motion to the nucleus once it is engaged in the scleral tunnel. This can lead to decoupling of the engaged hook and the nucleus, so that the hook comes out without complete nucleus extraction. The farther the fish hook docking point lies from the vertical midline, the stronger the rotating torque vector. Ideally, the fish hook should have a right-sided curve when viewed from above; it should therefore be inserted from the left side of the scleral tunnel so that the tip lies approximately in line with the vertical meridian.
Advantages
Any size and type of nucleus can be delivered, even through a minimally dilated pupil, as the whole nucleus need not be prolapsed into the anterior chamber (AC). Intraoperative endothelial trauma is avoided because a safe cushion of viscoelastics is always maintained between the anterior nucleus surface and the corneal endothelium. Because repeated prolapse of the nucleus into the AC is avoided and the superior pole of the nucleus is engaged directly into the sclero-corneal tunnel, endothelial trauma is further minimized. Surgical time is shorter than with other methods.
Disadvantages
Cheese-wiring of the nucleus can be encountered while hooking soft cataracts. In such cases, the fish hook is not actually hooked; it merely directs the nucleus to glide over it and out of the tunnel. Prolapsing the superior pole can be tricky in a miosed pupil. One of the most dreaded complications is engagement of the fish hook tip with another intraocular structure. Fish-hooking of the iris can lead to iris cut-through or iridodialysis. It is extremely difficult to disengage hooked iris tissue from the fish hook, and extreme caution and vigilance are required to avoid this complication. It usually happens during a failed nucleus extraction, when the nucleus is half engaged in the scleral tunnel. Other instances are when both the superior and inferior poles have popped out and the 6-o'clock pupillary margin is caught between the lens and the fish hook tip. Capsular dialysis and intracapsular cataract extraction (ICCE) are other dreaded complications in the setting of an inadequately sized capsulorhexis.
Previously published literature has compared various nucleus delivery techniques in MSICS, including the fish hook, with regard to their relative safety and efficacy. Sharma et al., in a prospective randomized interventional study, concluded that the fish hook technique has limited utility in black cataracts. This may be due to improper hooking of the nucleus, resulting either from an incorrect assessment of nucleus size or from poor visibility of the hook with increasing cataract density, when the procedure becomes essentially blind. However, with increasing surgeon experience, this technique can be a boon for such cataracts, as it demands very little space in the anterior chamber. They also compared it with other techniques of nucleus delivery and found that complications such as striate keratopathy, corneal edema, anterior chamber inflammatory response, retained cortical matter, secondary glaucoma, uveitis, hyphema, decentered IOL, irregular pupil, and hypotony were almost similar. Another prospective study, by Patil et al., documented an intraoperative complication rate with the fish hook as high as 57.58%. This could probably be attributed to the use of hooks made from 26½-gauge needles, which are thicker than the ideal fish hook. In addition, most cases selected for this technique were grade 1 or 2 cataracts, which are difficult to hook. In contrast, Hennig et al. reported more than 340,000 successful surgeries, with complications in 3.1% of cases during the first 100 surgeries performed by beginners. The same author also published an article in 2002 reporting more than 2,000 surgeries with a complication rate of 1.2%. Finally, the authors concluded that proper case selection according to the grade of cataract, pupillary dilatation, and similar factors is the deciding factor when selecting the technique of nucleus delivery, which we also believe to be true provided the surgeon has sufficient knowledge and expertise.
In comparison to other techniques of nucleus management, the fish hook technique is a safe, efficient, and cost-effective method of nucleus delivery in MSICS, and it is particularly useful in high-volume centers. We hope that this mini-review will help knowledge of the fish hook technique, at present restricted to a particular part of the world, to spread globally.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Transcriptomic and proteomic profiling identifies feline fibrosarcoma as clinically amenable model for aggressive sarcoma subtypes

Fibrosarcomas (FSA) are tumors of mesenchymal origin that affect humans and cats alike. In both, FSA are characterized by locally aggressive growth and tissue invasion that result in high rates of tumor recurrence as well as low sensitivity to radio- and chemotherapy. In humans, FSA are classified as 'infantile FSA' and 'adult FSA'. Adult FSA are very rare, highly malignant tumors with a poor prognosis. They are diagnosed by exclusion of other soft-tissue sarcoma subtypes, and their low incidence is challenging in many respects, ranging from insufficient understanding of the underlying pathological mechanisms and genetic aspects and the absence of diagnostic markers to a lack of efficient therapies for affected patients. Hence, clinical progress in treating FSA is undermined by the small number of affected patients, a situation that is further exacerbated by the absence of valid disease models. Interestingly, naturally developing tumors in domestic animals present potentially valuable models to further develop our understanding of tumor biology and treatment. By integrating naturally occurring cancers observed in veterinary patients into the analytic and developmental pipeline for novel therapies, comparative oncology aims to improve cancer management. Comparative oncology is particularly promising for FSA: while exceedingly rare in humans, with an incidence of around 0.2 cases in 100,000, FSA in domestic cats are frequent, representing between 12 and 41 % of all feline cutaneous tumors. Clinically, feline FSA represents an aggressive, infiltrative tumor type prone to local recurrence and with a metastatic rate between 10 and 24 %. As such, the combination of high prevalence, similar histomorphology and clinical behavior makes feline FSA highly interesting as naturally occurring and clinically amenable models for adult FSA. Identification of molecular homology between human and feline FSA could therefore facilitate development and structured clinical assessment of novel therapies that are difficult or impossible to assess in the human setting, while simultaneously unlocking novel treatment options for affected cats. Feline FSA is currently classified into two possible entities, namely feline injection site sarcoma (FISS), the more aggressive and frequent form, and non-injection site FSA. FISS is a malignant tumor that originates from the excessive growth of fibroblasts and myofibroblasts in regions of persistent inflammation, especially at injection sites. Although the association between FISS and vaccines is strongest, other medical procedures that involve injection, such as the use of long-acting steroids and antibiotics, nonabsorbable suture material, insulin, and microchips, have also been linked with its development. Hence, inflammation, regardless of its origin, appears to facilitate the development of FISS. While it has been suggested that FISS differs from non-injection associated FSA based on histopathological features, the criteria to distinguish between the two conditions cannot be considered conclusive. Hence, it remains unknown whether and how the different manifestations of FSA differ on a molecular level. The current gold standard for treatment of FSA in all species is complete surgical excision.
However, distinguishing tumor from unaffected normal tissue (NT) poses significant challenges due to highly infiltrative growth that requires large surgical margins to ensure clean resections . This is especially true in cats, as studies have repeatedly demonstrated that the first cut is the best chance for cure. Due to the invasive nature of the disease, the current recommendation for surgical margins in cats is 5 cm lateral and two fascial planes deep, representing the most aggressive surgical dose in sarcoma resection across species . Incomplete resections cause tumor relapse and a significantly negative impact on prognosis. Additional neo-/adjuvant radio- and chemotherapy protocols offer limited success . Therapeutic outcome for FSA in all species could greatly benefit from targeted modalities, such as precise tumor visualization using targeted dyes to improve resection or targeted delivery of cytotoxic payloads, particularly in the metastatic setting. However, development of such targeted modalities is hindered by the lack of molecular data identifying targets that differentiate FSA from unaffected NT. While feline and adult FSA share histomorphologic traits and exhibit similar clinical behavior, there is a striking lack of detailed molecular characterization for both entities. Thus, it remains unclear how feline and adult FSA compare on a molecular level. Moreover, it remains obscure what differentiates tumor from healthy surrounding NT in either species, preventing identification of tumor-specific targets that could be leveraged in the context of targeted therapy or targeted visualization strategies to improve patient outcome. As such, the shortage of data on both molecular cross-species homology and the differences between tumor and NT impedes the translational potential of feline FSA for development of novel therapeutic approaches that could greatly benefit human patients. We have established a powerful approach to profile transcriptomic and proteomic changes in spatially defined areas of archival formalin-fixed paraffin embedded (FFPE) patient tumors using LCM followed by RNAseq and LC-MS/MS . Here, we apply this approach to profile 30 cases of feline FSA and matched unaffected NT to gain detailed insight into the molecular changes and therapeutic vulnerabilities in spontaneous feline FSA.
Ethics approval and consent to participate No animals were killed for the purpose of this research project, as the analyzed tissue had been surgically removed for curative reasons with the consent of the patient owners. According to the Swiss Animal Welfare Law Art. 3 c, Abs. 4 the preparation of tissues in the context of agricultural production, diagnostic or curative operations on the animal or for determining the health status of animal populations is not considered an animal experiment and, thus, does not require an animal experimentation license. The use of FFPE material from feline patients which was obtained for diagnostic reasons therefore does not require a formal ethics approval and complies with national guidelines. The project was subjected to an institutional ethics review and approved by the Ethics Committee of the Faculty of Medicine, University of Zurich (MeF-Ethik-2024-01). Selection of cases for LCM Fibrosarcoma and matched unaffected NT (skeletal muscle (SM), adipose tissue (AT) and connective tissue (CT)) were concurrently isolated using laser-capture microdissection from FFPE tissue of 30 feline FSA samples that were provided by the Institute of Veterinary Pathology of the Vetsuisse Faculty Zurich. Based on the clinical history and the anatomic location, an injection-related origin (i.e. FISS) cannot be ruled out for any of the cases (i.e. all patients have received vaccinations). However, in the absence of definite markers to differentiate between FISS and non-FISS FSA, and the unresolved question whether the two subtypes really differ on a molecular level, all tumors were considered as ‘FSA’. All samples were either from the Small Animal Hospital of Zurich or external cases sent in by veterinarians practicing in Switzerland. Cases were reviewed and selected by a certified pathologist (FG) according to the criteria indicated by . Paraffin blocks were routinely kept at room temperature. Tissue processing for LCM was performed as previously described . provides clinical details for all cases included in the study. Laser-capture microdissection (LCM) Laser-capture microdissection was performed using the ArcturusXT TM Laser Capture Microdissection System (Thermo Scientific) as described in . Areas of interest identified by a certified pathologist (FG) were isolated according to the manufacturer's protocol and the criteria described in . Selectivity of isolation was verified by microscopic examination of the LCM cap as well as the excised region after microdissection. 2 caps were collected per case and tissue. After excision, the thermoplastic membranes containing captured tissue were peeled off the caps using a sterile scalpel and forceps and subsequently stored in a 1.5 ml centrifuge tube (EppendorfⓇSafe-Lock tubes) and frozen at −20°C until further processing. Sample preparation for proteomic analysis For proteomic analysis, all samples were processed in a single batch. For protein extraction, sterile blades and forceps were used to peel off the thermoplastic membranes containing captured cells from the cap, which were then transferred into a sterile Eppendorf® Safe-Lock tube. Microdissected tissue was rehydrated by adding 900 μl of heptane and incubating for 10 min at 30°C in a thermomixer (800 rpm). After centrifugation (20′000 x g , 10 min), the heptane was removed, and the step was repeated. 
Subsequently, the membranes were washed with 900 μl of ethanol (5 min, RT, 1′000 rpm), 200 μl of 90 % ethanol (5 min, RT, 1′000 rpm) and 200 μl of 75 % ethanol (5 min, RT, 1′000 rpm). The samples were stored at -80°C overnight. The samples were then prepared using a commercial iST Kit (Pre-Omics, Germany) with an updated version of the protocol, as described in . Liquid chromatography-mass spectrometry analysis LC-MS/MS analysis was performed on an Orbitrap Fusion Lumos (Thermo Scientific) equipped with a Digital PicoView source (New Objective) and coupled to an M-Class UPLC (Waters). Solvent composition of the two channels was 0.1 % formic acid for channel A and 99.9 % acetonitrile in 0.1 % formic acid for channel B. Column temperature was 50°C. For each sample 3 µl of peptides were loaded on a commercial ACQUITY UPLC M-Class Symmetry C18 Trap Column (100Å, 5 µm, 180 µm x 20 mm, Waters) connected to a ACQUITY UPLC M-Class HSS T3 Column (100Å, 1.8 µm, 75 µm X 250 mm, Waters). The peptides were eluted at a flow rate of 300 nl/min. After a 3 min initial hold at 5 % B, a gradient from 5 to 24 % B in 80 min and 22 to 36 % B in additional 10 min was applied. The column was cleaned after the run by increasing to 95 % B and holding 95 % B for 10 min prior to re-establishing loading condition. Samples were measured in randomized order. For the analysis of the individual samples, the mass spectrometer was operated in data-independent mode (DIA). DIA scans covered a range from 400 to 1000 m/z in windows of 16 m/z. The resolution of the DIA windows was set to 30′000, with an AGC target value of 500′000, the maximum injection time set to 50 ms and a fixed normalized collision energy (NCE) of 33 %. Each instrument cycle was completed by a full MS scan monitoring 350 to 1500 m/z at a resolution of 120′000. The mass spectrometry proteomics data were handled using the local laboratory information management system (LIMS) . LC-MS/MS data processing The acquired MS raw data were processed for identification and quantification using FragPipe (version 19.0), MSFragger (version 3.6), and Philosopher (version 4.8.1) . Spectra were searched against a Uniprot Felis catus database (taxonomy ID 9685, downloaded on 11.05.2023) concatenated to its reversed decoy database, and common protein contaminants. MSFragger-DIA mode for direct identification of peptides from DIA data was used. Strict trypsin digestion with a maximum of two missed cleavages was set. Carbamidomethylation of cysteine was selected as a fixed modification, while methionine oxidation was set as variable modifications. EasyPQP was used to generate a DIA-NN-compatible spectral library. Subsequent quantification was performed with DIA-NN version 1.8.2. LC-MS/MS data analysis Differential protein expression analysis was performed using the r-package prolfqua . The intensities were first log 2 transformed and then z-transformed so that the sample mean and variance were equal. Next, we fitted a linear model with a single factor (tissue) to each protein, and tissue differences (protein log 2 fold changes (log 2 (FC)) were estimated and tested using the model parameters. To increase the statistical power, the variance estimates were moderated using the empirical Bayes approach, which exploits the parallel structure of the high throughput experiment . Finally, the p-values are adjusted using the Benjamini and Hochberg procedure to obtain the false discovery rate (FDR). 
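The moderated per-protein model described above can be sketched as follows; the call signatures of prolfqua (the package actually used) are not reproduced here, and the sketch instead uses the conceptually equivalent limma workflow on a placeholder intensity matrix.

```r
# Conceptual sketch of the differential protein expression analysis described
# above, implemented with limma rather than prolfqua.
# 'mat' is a log2-transformed, scaled protein intensity matrix
# (rows = proteins, columns = samples); 'tissue' is a factor with levels
# "Tumor", "AT", "CT", "SM" matching the columns of 'mat'.
library(limma)

design <- model.matrix(~ 0 + tissue)
colnames(design) <- levels(tissue)

fit  <- lmFit(mat, design)                          # one linear model per protein
cont <- makeContrasts(Tumor - CT, levels = design)  # e.g. tumor vs. connective tissue
fit2 <- eBayes(contrasts.fit(fit, cont))            # empirical Bayes moderation

topTable(fit2, adjust.method = "BH", number = Inf)  # log2(FC) and BH-adjusted p-values (FDR)
```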
RNA isolation and sequencing RNA was isolated using the Covaris® truXTRAC FFPE RNA kit and the Covaris® E220 focused ultrasonicator as established . RNA abundance and quality was analyzed using the 4200 or 2200 Tape Station Software using the High Sensitivity RNA ScreenTape kit (Agilent Technologies), according to the manufacturer's protocol. For RNAseq, all samples were processed in a single batch. 10 ng of RNA diluted to a concentration of 0.33 ng/μl in a total volume of 30 μl was submitted for RNA sequencing, as detailed in . The SMARTer Stranded Total RNAseq Kit-Pico Input Mammalian (Clontech/Takara Bio USA) was used according to the manufacturer's protocol for RNA library preparation and ribosomal RNA depletion. Single-read sequencing (125 bp) was performed in a single batch for all samples of the same tumor type and species using the Illumina HiSeq 4000 according to standard protocols of the Functional Genomics Centre Zurich (FGCZ). RNAseq data processing The raw reads were cleaned by removing adapter sequences, trimming low-quality ends, and filtering reads with low quality (phred quality < 20) using Trimmomatic (version 0.36) . Sequence pseudoalignment of the resulting high-quality reads to the feline reference genome Felis_catus_9.0, Release_102-2021-02-02, and quantification of gene-level expression was carried out using Kallisto (version 0.44) . Gene counts were imported into the R/Bioconductor package EdgeR (R, version 3.6.1, EdgeR, version 3.28), and trimmed mean of M values normalization size factors were calculated to adjust for sample differences in library size. The generalized linear model was used to detect differentially expressed genes incorporating adjusted (Benjamini and Hochberg method) p-values. Pie chart Pie charts were plotted using the PieChart function from the lessR R package . Venn diagram Venn diagrams were produced either using ggvenn R package or the BioVenn R package for proportional diagrams. Identification of tumor and NT-specific proteins was performed by separating data according to tissue group and filtering by row mean !=0 to ensure presence in at least one sample. The intersection of each tissue group was used to calculate overlapping proteins and separate tissue-specific targets. Furthermore, the intersection of differentially expressed genes and proteins between tumor and the different NT was used to identify common differential expressed genes and proteins. Heatmap Heatmaps were generated using R package ComplexHeatmap with row clustering distance was set to “Euclidean” and RowAnnotation according to overall high, mid and low expression. HALLMARK and pathway analysis of high and lowly expressed proteins was performed with molecular signatures database (MSigDB) . Barcodeplot Cross-species comparative analysis of tumor-specific expression was performed using the barcode enrichment plot from limma . The proteomic dataset by Tang et al, 2024 was used as external human dataset, and the canine data was from . All target identifiers from the external datasets were summarized at the gene level using BioMart . Raw data was log 2 normalized and genes were ranked according to their mean expression across all samples. Plots show the ranked position indicating the expression in the feline cohort (x-axis) compared to the ranked expression in the external human and canine datasets (line extension of the y-axis). Only common genes in feline, canine and human were included in the barcodeplot analysis. 
Pearson correlation analysis of ranked position was used to confirm significance. The top 100 common highly expressed genes from each plot were identified as the leading edge and selected for input in the Venn diagram. Gene set enrichment analysis (GSEA) and over representation analysis (ORA) For GSEA, ORA and KEGG pathway analysis, the tool WebGestalt ( http://www.webgestalt.org ) or the molecular signatures database (MSigDB v2023.2.Hs) were used. Additional pathway analysis was performed with the help of QIAGEN Ingenuity Pathway Analysis (QIAGEN Inc., https://digitalinsights.qiagen.com/IPA ) comparing the differentially expressed proteins in tumor to each NT. Proteomic and genomic data integration In addition to gene set enrichment analysis, we also performed ssGSEA using the public server from GenePattern ( https://www.genepattern.org/#gsc.tab=0 ) to calculate separate enrichment scores for each pairing of a gene set and tumour sample. Principle component analysis was performed applying prcomp on normalized protein intensity values. 2D and 3D visualization was achieved using R packages ggplot2 and scatterplot3d , with the first two or three principal components as x, y and z axis values respectively. For the comparison of the transcriptomic and proteomic data set, the online tool Shiny App ( https://fgcz-shiny.uzh.ch/connect/ ) run by the Functional Genomics Center Zurich, was used. Uniprot protein identifiers were first converted to ensembl and then gene names. Feline genes (Felis_catus_9.0) were converted to human orthologues using Ensembl BioMart (release 100) prior to analysis with MetaCore . For the pathway analysis, the web tool MetaCore from Clarivate AnalyticsTM was used ( https://portal.genego.com ). Survival analysis Association of gene expression with disease-free interval or overall survival in the human TCGA-SARC dataset ( http://cancergenome.nih.gov/ ) was performed using GEPIA 2.0 ( http://gepia.cancer-pku.cn ). Cell culture FSII and FSIII cells were a kind donation from Prof. M. Reinacher (Department of Veterinary Pathology, Justus-Liebig-University of Giessen, Germany) . Cells were cultured under standard conditions @ 37°C in humid atmosphere with 5 % CO 2 in Gibco™ DMEM, low glucose, GlutaMAX™ Supplement with 15 % FCS (Gibco), MEM-Nonessential amino acids (Gibco) and antibiotic-antimycotic supplement (Gibco), and regularly tested for mycoplasma. Twenty-four hours before treatment, 2,500 cells were seeded in 100 μl complete medium into 96 well plates. Drugs used were: Vincristin (Teva Pharma AG), Vinorelbine (Sandoz Pharmaceuticals AG), Vinblastin Sulfate (Teva Pharma AG), Cytarabine (Pfizer Switzerland), Actinomycin D (Sigma), Carboplatin (Accord Healthcare AG), Doxorubicin (Teva Pharma AG), Gemcitabine (Fresenius), ATMi (KU-55933, Sigma), ATRi (AZ-20, Selleck Chemicals), and PARPi (Olaparib, Selleck Chemicals). For all experiments, stock solutions of inhibitors were serially diluted in complete medium to obtain the required concentrations and used to replace the seeding medium. Medium was replaced after 96 hours in experiments lasting 6 days. After the incubation period, medium was replaced with fresh medium containing 0.025 mg/ml Resazurin in PBS, and plates were further incubated at 37°C. Sample fluorescence was measured after 2 to 4 hours incubation using the BioTek Synergy H1 Plate Reader (Agilent Technologies) set to ex = 560 and em = 590. 
Mean values of 4 to 6 replicate wells were calculated for each treatment point and cell line and normalized to control treated cells. Graphical display of results GraphPad Prism, Shiny App and MetaCore were used for calculation of IC50 and visual representation of the results, along with selected R packages previously mentioned.
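In the study, IC50 values were calculated with GraphPad Prism; the base-R sketch below, using entirely hypothetical dose and fluorescence values, only illustrates the normalization of the resazurin read-out to untreated controls and a crude IC50 interpolation.

```r
# Sketch (hypothetical values): average replicate resazurin fluorescence per
# dose, normalize to the untreated control, and interpolate a crude IC50.
dose <- c(0, 0.01, 0.1, 1, 10, 100)            # drug concentration (e.g. uM)
fluo <- rbind(c(100, 98, 90, 60, 25, 10),      # replicate 1 (arbitrary units)
              c(102, 97, 88, 55, 22, 12),      # replicate 2
              c( 98, 99, 92, 58, 28,  9))      # replicate 3

viability <- colMeans(fluo) / mean(fluo[, dose == 0])  # fraction of control

# linear interpolation of the dose giving 50 % viability, on a log10 scale
ic50 <- 10^approx(x = viability[-1], y = log10(dose[-1]), xout = 0.5)$y
ic50
```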
Spatially defined multiomic profiling of tumor and matched NT in a cohort of 30 feline FSA The cohort of feline FSA is composed of 30 primary tumors from 27 European Shorthair, two Maine Coon and one Persian mix breed cats, of which 17 were female (8 neutered) and 13 male (2 neutered) . Median age was 12 years, and anatomical sites affected (as per medical reports) included abdomen, back, chest, flank, hindlimb, neck, shoulder, and thigh ( A). Based on the clinical history (vaccinations) and the anatomic location, an injection-related origin (i.e. FISS) cannot be ruled out for any of the cases. However, in the absence of definite markers to differentiate between FISS and non-FISS FSA, all tumors were considered as ‘FSA’. To gain insight into the proteomic and transcriptomic landscape of these tumors, we applied LCM to isolate tumor and matched unaffected connective tissue (CT), adipose tissue (AT) and skeletal muscle (SM) from all cases in this cohort ( B). All the matched normal tissues (NT) are frequently found in the vicinity of FSA and hence present tissue that the tumor needs to be differentiated from for targeting purposes in a clinical setting. Subsequently, LCM-isolated samples were analyzed by LC-MS/MS and RNAseq, respectively. The final sample set analyzed by LC-MS/MS consisted of a total of 98 tissue samples (27 tumor, 27 AT, 24 CT and 20 SM), while the final RNAseq dataset was composed of 77 specimens (30 tumor, 16 AT, 11 CT and 20 SM) ( C). In total, proteomic analysis detected 5′302 different proteins in all tumor samples, 4′296 in CT, 1′289 in AT and 4′094 in SM, with an average of 4554 proteins detected per tumor sample, 2′139 in CT, 489 in AT and 2′468 in SM ( D, Supplementary Tables 1 and 2, and Supplementary Fig. 1). Of these, 2′324 proteins were commonly detected in every T, 389 in CT, 97 in AT and 1′197 in SM. Transcriptomic analysis identified a total of 13277 transcripts across all tumors, 13′277 in CT, 13′218 in AT and 13′194 in SM ( D, Supplementary Table 3). 7′454 transcripts were shared across every tumor sample, 6′251 in CT, 6′464 in AT and 5′022 in SM. As such, this represents the first detailed proteomic and transcriptomic dataset of feline FSA and its surrounding NT. Transcriptomic profiling of feline FSA identifies transcripts highly overexpressed in tumor compared to unaffected NT Principal component analysis (PCA) using all identified transcripts clearly separated tumor from the different normal tissue types within the first three principal components ( A). Of note, the overlap between AT and CT was presumably due to AT having low RNA contents in general and the presence of fibroblasts as a structural feature in AT, which contributes a CT-like expression signature. This supported the validity of our approach to analyze spatially defined tissue regions using RNAseq and highlighted the difference between the tissue types as the major source of variability, overriding any potential effects due to differences in breed, anatomical location of the tumor or other clinical features. Analysis of differentially expressed genes (DEGs) between tumor and each normal tissue (cut-off values for significance: log 2 (FC) > 1 and < -1, FDR < 0.05) identified 1′163 significantly up- and 1′331 significantly downregulated targets between tumor and AT, 638 up- and 1′102 downregulated between T and CT and 2′736 up- and 2′072 downregulated in T vs SM ( B). 
Gene set enrichment analysis (GSEA) of expression changes using the KEGG database revealed a strong enrichment of pathways related to cell cycle, DNA replication and repair and RNA production in tumor tissue compared to all three NTs separately (Supplementary Fig. 2). In contrast, AT was characterized by pathways involved in lipolysis, AMPK, PPAR, and adipocytokine signaling, while CT featured cytokine-cytokine receptor interactions, cell adhesion molecules and the complement cascade. SM was characterized by typical muscle-related pathways, including muscle contraction, adrenergic and insulin signaling, confirming the specificity of tissue isolation (Supplementary Fig. 2). Reactome pathway analysis further supported these findings (Supplementary Fig. 3). Of the transcripts significantly upregulated in tumor, 436 targets were commonly upregulated by a log2(FC) > 1 across all three individual comparisons, representing potential candidates for markers that discriminate tumor from all NTs ( C and Supplementary Table 4). Overrepresentation analysis of these 436 targets using KEGG pathway analysis revealed involvement in one-carbon metabolism, DNA replication, cell cycle and p53 signaling, among others ( D). The top 20 targets highly upregulated in tumor compared to all NT (ranked according to the log2(FC) T vs CT) include transcripts encoding COL11A1, TNC, PTK7, and P4HA3 ( E and F). Unsupervised hierarchical clustering of tumor tissue alone revealed a somewhat heterogeneous structure, suggesting several subclusters within the data ( G). GSEA with the HALLMARK and Reactome databases revealed an enrichment of epithelial to mesenchymal transition, Myc targets, mTORC1 signaling, translation, infectious disease and nervous system development among the highly expressed genes. In contrast, the lowly expressed genes were enriched for pathways including G2M checkpoint, E2F targets, and cell cycle signaling events ( G). In summary, feline FSA display a distinct transcriptional profile strongly dominated by pathways centered around the cell cycle, DNA repair and DNA replication that clearly differentiates them from unaffected NT.
Proteomic profiling of feline FSA and matched NT reveals potential tumor-specific markers
Similarly to the RNA data, PCA differentiated between the four tissue types within the first three principal components on the protein level ( A). Analysis of differentially expressed proteins between tumor and each NT (cut-off values for significance: log2(FC) > 1 and < -1, FDR < 0.05) identified 826 significantly up- and 282 significantly downregulated proteins between tumor and AT, 992 up- and 782 downregulated proteins between T and CT, and 808 up- and 1′020 downregulated proteins in T vs SM ( B). GSEA of expression changes using KEGG pathways between tumor and the NTs identified pathways involved in ribosome or protein assembly, protein processing in the endoplasmic reticulum and antigen processing and presentation as positively enriched in tumor tissue (Supplementary Fig. 4). Of the proteins detected as significantly upregulated in tumor, 312 were shared across all three individual comparisons, representing potential tumor-specific targets ( C and Supplementary Table 5). GSEA of these targets using HALLMARK revealed involvement in PI3K-Akt-mTOR signaling, G2M checkpoint, Myc targets and epithelial to mesenchymal transition, among others ( D).
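A minimal sketch of a ranked GSEA of this type is shown below, using the fgsea package as an assumed implementation; the ranking metric and gene sets are simulated placeholders rather than the KEGG/HALLMARK collections used in the study.

```r
# Minimal ranked GSEA sketch: genes ranked by a tumor-vs-NT statistic (e.g. log2FC),
# tested against two toy gene sets; fgsea is an assumed implementation choice.
library(fgsea)
set.seed(1)

ranks <- sort(setNames(rnorm(1000), paste0("gene", 1:1000)))   # named ranking metric
pathways <- list(cell_cycle = paste0("gene", sample(1000, 50)),
                 dna_repair = paste0("gene", sample(1000, 40)))

gsea_res <- as.data.frame(fgsea(pathways = pathways, stats = ranks))
gsea_res[order(gsea_res$padj), c("pathway", "NES", "padj")]    # normalized enrichment scores
```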
The top 5 overexpressed proteins in tumor compared to NT (ranked according to the log2(FC) T vs CT) included SFRP2, KDM5A, CMTM5, HSPA5 and FN1 ( E). Of all detected proteins, 19 were found only in samples of CT, 30 were specific to SM, and 625 were exclusive to tumor tissue, while none were detected only in AT ( F and Supplementary Table 6). ORA of these 625 tumor-exclusive proteins using HALLMARK pathways identified enrichment of mitotic spindle, E2F targets, G2M checkpoint, inflammatory response, and Myc targets, among others (Supplementary Fig. 5). 137 of these proteins were detected in >80 % of cases (i.e. 22/27), 77 in >90 % (i.e. 24/27), and 6 proteins were present in every single tumor sample analyzed (Supplementary Table 7). These 6 tumor-exclusive proteins detected in every single sample comprised MARCKSL1, IKBIP, COPZ1, TIMP1, FAM50A and DPM3 ( G). As feline FSA are considered highly malignant forms of STS, we next evaluated whether these tumor-exclusive proteins were associated with tumor aggressiveness. To this end, we assessed the association of their expression with disease-free interval or overall survival in human STS using the TCGA-SARC dataset ( H). Indeed, this analysis found high IKBIP, MARCKSL1 and COPZ1 levels to be associated with shorter disease-free interval or survival (IKBIP: disease-free interval (p = 0.089), MARCKSL1: overall survival (p = 0.0038), COPZ1: disease-free survival (p = 0.098)). Therefore, these data suggest that high expression of these proteins is associated with worse outcome in human STS. It is well established that increased levels of RNA do not necessarily translate to increased protein levels. Correlations between proteomic and transcriptomic data using the log2(FC) values from comparisons of T vs AT, CT, and SM were only moderate (r = 0.37 for AT, r = 0.37 for CT, and r = 0.53 for SM; Supplementary Fig. 6). As such, this demonstrates that transcriptomic and proteomic analysis of patient tissue yields complementary information and enables a more comprehensive view than either analysis alone. To understand which of the significantly upregulated proteins in tumor were also upregulated on the RNA level, we computed the overlap between the datasets. The Venn diagram revealed an overlap of 29 shared targets, including FN1, POSTN, and RUNX2, further validating the upregulation of these targets in feline FSA ( I and Supplementary Table 8). Analysis of the expression differences between tumor and NT using QIAGEN Ingenuity Pathway Analysis detected eukaryotic translation initiation, SRP-dependent co-translational protein targeting to membrane and EIF2 signaling as the top canonical pathways in all three comparisons of tumor vs NT ( J). Identification of top upstream regulators revealed TP53 and MYC activation in tumor tissue ( K). Finally, assessment of the activation status of the top 20 activated canonical pathways further reinforced the massive emphasis on RNA- and translation-related pathways in feline FSA, as well as involvement of WNT and hedgehog signaling ( L). In conclusion, the proteomic signature clearly differentiates feline FSA tumor tissue from unaffected AT, CT and SM, revealing a massive dependence on translation-related pathways and a significant number of proteins either strongly overexpressed in or restricted to tumor tissue that could potentially serve as tumor-specific markers.
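The RNA-protein fold-change comparison reported above (Pearson r per tissue contrast) can be reproduced conceptually in a few lines of base R; the sketch below uses simulated fold-change vectors rather than the study data.

```r
# Minimal sketch of the RNA-protein fold-change correlation (T vs CT as an example);
# both vectors are simulated stand-ins for per-gene log2(FC) values.
set.seed(1)
genes <- paste0("gene", 1:500)
rna_lfc  <- setNames(rnorm(500), genes)                  # transcript log2(FC), simulated
prot_lfc <- setNames(0.4 * rna_lfc + rnorm(500), genes)  # protein log2(FC), partially correlated

shared <- intersect(names(rna_lfc), names(prot_lfc))
cor(rna_lfc[shared], prot_lfc[shared], method = "pearson")
plot(rna_lfc[shared], prot_lfc[shared],
     xlab = "RNA log2(FC), T vs CT", ylab = "Protein log2(FC), T vs CT")
```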
Feline FSA comprises subtypes characterized by neuronal, fibroblastic and inflammatory expression patterns potentially associated with differences in clinical behavior
Assessment of patient outcome within the cohort allowed identification of two subgroups of patients with differing clinical outcome. Five patients that showed worse survival time than expected (i.e. survival < 500 days when excised with clean margins or < 60 days with unclean margins or cases with metastatic disease/systemic failure) were classified as ‘highly aggressive’ (HA), while 7 patients that surpassed survival > 500 days after resection (some even despite R1 margins) and without metastases were classified as ‘low-aggressive’ (LA; and Supplementary Table 9). Differential gene expression analysis between these two groups using log2(FC) > 2, p < 0.01 detected 79 significantly deregulated targets (11 up- and 69 downregulated), which also clearly separated both groups by unsupervised clustering ( A and B). GSEA revealed a strong enrichment of immune-related pathways including phagocytosis, engulfment, antigen processing and presentation, adaptive immune response, and B cell receptor signaling in the HA group ( C). In contrast, processes involved in transmembrane transport, synaptic membrane adhesion, regulation of presynapse assembly and myelination as well as cilium movement were enriched in LA tumors, potentially hinting at a more neuronal-like differentiation state of these tumors ( C). To assess functional differences between these two groups on the protein level, we next investigated the respective LC-MS/MS data using single sample Gene Set Enrichment Analysis (ssGSEA) focusing on the top 20 pathways with highest variance. Interestingly, this identified clear differences in the inflammatory response pathway as well as the complement and coagulation cascade, both of which were highly enriched in LA tumors as opposed to HA tumors ( D). Moreover, the HA tumors were characterized by a significantly higher enrichment of DNA replication and DNA mismatch repair-associated pathways. These data suggest that differences in immune-related and neuronal RNA expression patterns are associated with tumor aggressiveness in feline FSA due to functional differences in the activation status of inflammatory response, complement and coagulation as well as DNA replication. To further explore feline FSA in molecular detail, we expanded the analysis to all included cases. Interestingly, PCA of tumor samples using all transcripts suggested the existence of 3 different tumor clusters: while 17 samples grouped tightly together to form a main cluster (C1), a second cluster (C2) composed of 7 tumors was clearly separated along PC1 from the main bulk of specimens, and a third cluster of 5 tumors was separated along PC2 (C3) ( E). To understand the molecular differences driving these three clusters, we further assessed the main loading factors driving PC1 and PC2. ORA of PC1 loadings using Gene Ontology of Biological Processes revealed highly significant enrichment of pathways involved in neuron development and differentiation as well as ion transmembrane transport ( F), while PC2 loadings > 0.2 were enriched for processes involving inflammatory responses, the immune system and leukocyte activation ( G). Finally, assessment of PC2 loadings < -0.02 revealed processes centered around extracellular matrix structure and organization, suggesting overrepresentation of mesenchymal functions associated with fibroblast function ( G).
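The tumor-only PCA and the inspection of PC1/PC2 loadings described above can be sketched in base R as follows; the expression matrix is simulated, and the ssGSEA step (which would typically rely on a dedicated package such as GSVA) is not shown.

```r
# Minimal sketch of a tumor-only PCA with inspection of the top loadings per component;
# the samples-by-genes matrix and all names are simulated placeholders.
set.seed(1)
expr <- matrix(rnorm(30 * 500), nrow = 30,
               dimnames = list(paste0("tumor", 1:30), paste0("gene", 1:500)))

pca <- prcomp(expr, scale. = TRUE)
summary(pca)$importance[, 1:3]                        # variance explained by PC1-PC3
head(sort(pca$rotation[, "PC1"], decreasing = TRUE))  # top positive PC1 loadings (genes)
head(sort(pca$rotation[, "PC2"], decreasing = TRUE))  # top positive PC2 loadings (genes)
```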
As such, these results suggest the existence of three separate tumor subtypes within the morphological feline FSA cluster. As the features that led to separation of C2 along PC1 were highly reminiscent of the neuronal features identified in the LA group, we manually curated the gene signature highly expressed in LA tumors to include only neuron-related targets (as per GSEA) and applied this to perform unsupervised hierarchical clustering of the full patient cohort. Strikingly, this revealed a clear separation of the samples in the C2 cluster from the other tumors ( H). Hence, the tumors in cluster C2 seem to correspond to a neuronal-like STS subtype that is associated with less aggressive clinical behavior. Of note, these tumors showed expression of the two neuronal markers Sox10 and GFAP. Moreover, closer inspection of the main features driving the separation of C3 along PC2 identified the immunosuppressive factors IDO1, CTLA4 and CD80 among the top 12 loadings. Based on this, we assessed expression of these and other well-annotated immunosuppressive features (CD274/PD-L1, PDCD1/PD-1, CD86) as well as markers for T-cells (CD3, CD8A, CD8B) and B-cells (CD19, DRA, JCHAIN) ( H). While T- and B-cell markers were present in both C2 and C3 tumor samples, C1 tumors appeared to be much less immune-infiltrated. Importantly, expression of immune-inhibitory molecules was strongly restricted to tumors of the C3 cluster, whereas the neuronal-like C2 tumors showed much less evidence of immunosuppression. In conclusion, feline FSA comprise three different molecular subtypes characterized by neuronal-like, fibroblastic and inflammatory expression patterns that may influence clinical behavior. These results highlight the need for more refined molecular diagnostic approaches to improve classification and diagnosis of feline FSA.
Feline FSA displays a high degree of molecular homology with both canine FSA and human fibroblastic sarcomas, allowing identification of tumor markers and therapeutic vulnerabilities
Given the suggested similarity between feline, canine and adult FSA, and in view of feline FSA as a potentially useful model for the human condition, we next wanted to assess the degree of molecular similarity across these three species. Due to the lack of transcriptomic data on adult FSA, we assessed interspecies similarity on the protein level, taking advantage of a recent proteomic dataset comprising 8 cases of ‘other fibroblastic sarcomas’ (other FS), the only available dataset for adult FSA, which however only contains matched normal CT for two cases. We postulated that if feline and human tumors were to share a high level of molecular homology, protein expression in the feline and human datasets should exhibit a similar expression pattern. To test this hypothesis and compare the two species, we ranked all tumor-derived proteins based on their expression in the feline dataset from low to high and assessed the enrichment of targets from ‘other FS’ in this list. Strikingly, the 50 % most highly expressed proteins in human STS (red vertical bars) were strongly enriched towards the highly expressed proteins in felines (right side), and vice versa for the lowly expressed proteins, demonstrating a very high correlation and wide-ranging conservation in protein expression between the two species ( A). To further understand the overlap in significantly overexpressed features in feline FSA with human other FS, we computed the overlap between these two datasets.
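A simple way to mimic the rank-based cross-species comparison described above is to rank feline proteins by expression and test whether the human highly expressed proteins sit toward the top of that ranking; the sketch below uses a Wilcoxon rank-sum test on simulated data as a crude stand-in for the barcode-style enrichment plot.

```r
# Minimal stand-in for the cross-species rank comparison: are proteins that are highly
# expressed in human tumors enriched towards the high end of the feline expression ranking?
# All objects are simulated placeholders, not the study datasets.
set.seed(1)
proteins    <- paste0("prot", 1:2000)
feline_expr <- setNames(sort(rnorm(2000)), proteins)   # feline proteins, ranked low -> high
human_high  <- sample(proteins[1001:2000], 400)        # simulated "top half in human" set

in_set <- names(feline_expr) %in% human_high
wilcox.test(rank(feline_expr)[in_set], rank(feline_expr)[!in_set],
            alternative = "greater")                   # tests a shift towards high feline ranks
```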
Interestingly, 251 of the 297 feline overexpressed proteins (15 did not have an annotated gene name and hence could not be compared) were also detected in human other FS (Supplementary Table 10). Importantly, 207 of these proteins had an average expression level among the top 32 % of all expressed proteins in human tumors, suggesting that these might represent good markers to differentiate tumor from adjacent NT also in human patients. Cross-comparison with the data for 26 human MFS, which are also fibroblastic tumors, revealed a highly similar picture, with 200 common proteins that had an average expression level among the top 29 % of expressed proteins. ORA of the 199 proteins shared across both comparisons revealed a massive enrichment for RNA processing and ribosome-related processes, suggesting that targeting transcription could represent a potential therapeutic vulnerability in these tumors ( B). As such, our feline dataset serves to identify features significantly overexpressed in tumor compared to normal tissue that are shared across species, supporting prioritization of potential tumor markers in the human dataset. To extend our cross-species analysis to also include canine FSA, we next compared our proteomic dataset to a canine FSA LC-MS/MS dataset that we had generated using the same approach. Again, comparison between the feline and the canine protein datasets revealed a highly significant enrichment of highly and lowly expressed proteins, respectively ( C), suggesting a strong conservation of protein expression between the two species. Despite the wide-ranging homology between feline and canine FSA, feline FSA generally display a more aggressive clinical behavior than the canine counterpart. To address whether we could identify molecular features driving this clinical observation, we performed ssGSEA using WikiPathways gene sets for all canine and feline proteomic tumor samples. This revealed striking differences between the two species: in contrast to canine FSA, feline tumor samples had a strong enrichment for type I and II interferon signaling, phagocytosis, and transactions involving DNA replication and repair mechanisms, including nucleotide excision repair and DNA mismatch repair ( D). This suggested a tumor-promoting role for interferon-mediated immune transactions and DNA replication- and repair-related features. To further assess interspecies differences, we performed ORA using KEGG pathways to compare proteins found uniquely in the feline but not the canine tumor samples ( E). This revealed a significant overrepresentation of ATM signaling, a key component in DNA damage signaling and repair, as well as retinoblastoma gene activity ( F). As DNA repair pathways have been identified as interesting, potentially druggable targets in a subset of human STS and have sparked currently running clinical trials, we further wanted to assess the relationship of the respective genes with clinical outcome in the human TCGA dataset. To do this, we generated a gene signature containing the relevant genes and examined their association with overall survival and disease-free interval (Supplementary Table 11). Strikingly, compared to patients with low expression of the signature, patients with high expression of our feline DNA repair signature had a significantly shorter disease-free interval as well as overall survival ( G and H). As such, these results suggest feline FSA to represent highly interesting models for clinical assessment of therapies for very aggressive forms of human STS.
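Conceptually, the signature-based survival analysis amounts to splitting patients by signature score and comparing Kaplan-Meier curves with a log-rank test. The sketch below illustrates this with the survival package on simulated data; TCGA data access and the actual signature genes are not shown.

```r
# Minimal sketch of a signature-based survival split: patients are dichotomized at the
# median signature score, then compared by Kaplan-Meier and log-rank test. All data simulated.
library(survival)
set.seed(1)
n <- 100
sig_score <- rnorm(n)                                              # per-patient signature score
group <- ifelse(sig_score > median(sig_score), "high", "low")
time  <- rexp(n, rate = ifelse(group == "high", 1/400, 1/900))     # days, simulated effect
event <- rbinom(n, 1, 0.7)                                         # 1 = event observed

fit <- survfit(Surv(time, event) ~ group)
survdiff(Surv(time, event) ~ group)                                # log-rank test
plot(fit, col = c("red", "blue"), xlab = "Days", ylab = "Survival probability")
```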
Adjuvant systemic therapy to improve tumor control after surgical excision would be highly beneficial but is complicated by the high chemoresistance of feline FSA. We therefore set out to explore potential therapeutic vulnerabilities based on our molecular insights using two feline FSA cell lines (FSII and FSIII, I - M). Consistent with a strong dependence on RNA transcription, both cell lines displayed a striking sensitivity towards Actinomycin D, an inhibitor of RNA transcription, with IC50 concentrations in the low nanomolar range or below ( I and J). Further, in accordance with the strong overrepresentation of mitotic spindle and G2M checkpoint activation, cells were sensitive towards the Vinca alkaloids Vincristine, Vinblastine and Vinorelbine, which interfere with microtubule polymerization and therefore block mitotic cell division, as well as Doxorubicin, a DNA-intercalating agent. In contrast, the nucleoside analogues Cytarabine and Gemcitabine, as well as Carboplatin (a DNA cross-linking agent), failed to exert any significant effect on the feline FSA cells. Based on the overrepresentation of DNA repair signaling pathways in feline FSA, we next assessed the influence of specific inhibitors of ATM (KU-55933, ATMi), ATR (AZ-20, ATRi) and PARP (Olaparib, PARPi). While cells did not display sensitivity to ATMi, both ATRi and PARPi were able to induce a significant reduction in cell viability after 6 days of incubation ( K - M). This highlights the potential of ATR and PARP inhibitors for the treatment of feline FSA patients. As such, our comprehensive molecular characterization identifies feline FSA as interesting and clinically amenable models for aggressive human STS, and identifies therapeutic vulnerabilities based on their molecular features.
Human adult FSA represent very rare STS that are diagnosed based on exclusion due to a lack of subtype-specific diagnostic markers. Specific molecular data on adult FSA are still exceedingly scarce, with a total of n = 8 patients analyzed by proteomic profiling and no RNAseq data available to date. This lack of detailed molecular data on these tumors and how they differ from adjacent normal tissue impedes identification of diagnostic and therapeutic targets to develop novel approaches for affected patients. Moreover, even if novel therapy approaches are identified, the scarcity of the disease makes clinical assessment in affected patients practically impossible, a situation that is further exacerbated by the absence of clinically relevant models. Here, feline FSA represent a potential solution to support clinical assessment of novel therapies to benefit patients of both species. However, with transcriptomic data from only n = 3 FISS patients available, the molecular fingerprint of feline FSA remains obscure. This precludes identification of novel therapeutic strategies and hinders unbiased cross-species comparisons to assess the value and limitations of the feline model to inform on therapies to benefit patients of both species. To address this gap in knowledge, we provide a detailed molecular landscape of 30 cases of feline FSA and matched NT using tissue-resolved multiomic profiling that allows identification of tumor-specific targets and detailed insight into the molecular underpinnings of these feline tumors to improve diagnosis, prognosis, and treatment strategies for both species. While spatial RNA sequencing approaches have recently emerged to deliver spatially resolved transcriptomic insight into patient tissue, discovery-based detection of proteins within the tissue context is only emerging, and most available approaches, such as imaging mass cytometry, rely on antibody-dependent detection of a small number of targets. This limits protein detection to predefined targets for which high-quality antibodies exist and is not directly adaptable to other species, given difficulties in epitope conservation that interfere with antibody-mediated detection. Our tissue-resolved approach, which can be applied to archival material, represents an ideal workflow for species-independent, discovery-driven assessment of patient tissue for both RNA and protein, especially in the case of STS, where rarity and heterogeneity of the disease add additional challenges to molecular investigations. Both transcriptomic and proteomic analysis of patient tissue offer exciting tools to complement genetic data and support translational research. Moreover, the combined assessment can significantly contribute towards understanding the molecular mechanisms driving STS growth and progression, identifying novel biomarkers of therapy response and identifying novel therapeutic targets. In addition, proteins represent the largest and most functional group of druggable targets, and transcript levels do not necessarily correlate with protein levels (Supplementary Fig. 6). The latter constitutes one of the most important strengths of proteomic assessment of patient tissue. Nevertheless, combined assessment of both RNA and protein data allows a much more thorough insight into the tissue, as the coverage of transcriptomic analysis is still much broader: in this study, RNAseq identified 3.5-fold more targets than LC-MS/MS ( D).
In certain tissues, detection of proteins is more difficult than in others, as demonstrated by the lower number of detected proteins in AT compared to the other tissues ( D). Also, it is well established that certain proteins, such as low-abundance cytokines, are highly difficult to detect using LC-MS/MS. In such cases, RNAseq data can significantly aid data interpretation. Hence, transcriptomics and proteomics represent complementary rather than redundant viewpoints that can be used to assess different questions ( I). While the recently available large-scale proteomic studies of human STS are very interesting and highly valuable, there are important limitations to the chosen approaches that our study addresses: firstly, these analyses are based on ‘tumor-enriched’ bulk approaches (i.e. containing up to 30 % of surrounding normal tissue per sample); secondly, neither of these studies included specifically defined matched adjacent NT, which precludes identification of targets that specifically allow differentiation of tumor from its native surroundings; thirdly, complementary RNAseq data are available only for a very small subset of 25 angiosarcomas, which limits assessment of the combined power of transcriptome and proteome analysis; finally, only 8 cases of fibroblastic STS were included in these large cohorts, which heavily limits the available data for these very rare tumors. Here, our study across 30 feline FSA provides highly valuable data to assess expression of candidate targets on protein and RNA level across three normal tissues that these tumors frequently need to be distinguished from, with relevance also for human STS. As such, this approach allows identification of tumor-specific diagnostic markers, which is of specific relevance also for the STS field, where correct diagnosis remains a challenge, especially so in the case of adult FSA, an exceedingly rare and aggressive tumor subtype lacking any specific marker of differentiation and representing a diagnosis of exclusion. Moreover, our data are of specific value with regard to the development of targeted therapeutic approaches for STS, including strategies to selectively enrich radionuclides, fluorescent dyes, cytostatics, or CAR-T cells in the tumor using specific ligands. We anticipate refinement, validation and preclinical development of such targets in follow-up projects to support translating these insights into clinical practice. In contrast to human STS, where molecular diagnosis allows classification of >100 different STS subtypes, diagnosis of veterinary STS entities is still largely based on histomorphology, especially so for feline tumors. This lack of granularity, combined with the inherent difficulty of diagnosing STS based on histomorphology alone, likely contributes to diagnostic inaccuracy and failure to identify existing subtypes that may also differ with regard to clinical prognosis. Interestingly, on the RNA level, we identified immune-mediated features and neuronal expression signatures as distinctive features of highly vs lowly aggressive tumors, respectively ( A-C). Moreover, expanding the analysis to the full dataset, we identify three subsets of tumors, C1-C3. C1 is characterized by a fibroblastic gene expression pattern, consistent with the suggested fibroblastic origin of these tumors. Meanwhile, tumors in the C2 subset exhibit a neuronal expression signature, and the C3 subset features high expression of factors involved in immune regulation ( E-H).
With extensive expression of immunosuppressive targets, tumors of the C3 subset may be amenable to immune-checkpoint blockade. C2 tumors are reminiscent of the group of peripheral nerve sheath tumors (PNSTs). In both humans and cats, PNSTs present in different clinical forms: they can range from benign subtypes, such as Schwannomas, to highly malignant tumors. Importantly, however, feline PNSTs with histologically malignant features have never been documented to metastasize, suggesting that these tumors behave in a more benign fashion in cats than in humans. In sharp contrast to this, ‘classic’ feline FSA are highly aggressive tumors that are characterized by poorly defined tumor margins, a high tendency to infiltrate surrounding tissue and form tumor extensions and satellite lesions, and frequently a high grade of malignancy, with a moderate metastatic rate, local recurrence rates following surgical resection between 11 and 80 %, and reported median survival times from 390 to 901 days, resulting in a very guarded prognosis. As such, the identification of a subset of feline FSAs with a neuronal-like expression pattern that may coincide with more benign clinical behavior is relevant, if validated in further studies. ssGSEA of the proteomic data between highly and lowly aggressive tumors suggests a possible connection between the benign behavior of these neuronal-like tumors and a significant enrichment of inflammatory responses and activation of the complement and coagulation cascades ( D). Interestingly, activation of the complement and coagulation system in human DDLPS has very recently been described to be associated with a significantly longer local recurrence-free survival. Though these tumors in general display fewer tumor-infiltrating lymphocytes than tumors with low activation scores in these pathways, the complement system is an innate immune defense mechanism that precedes activation of adaptive immunity, and deficiencies therein impair both B- and T-cell responses. It is important to highlight here that RNA and protein data yield opposing information regarding activation of several pathways, including the complement system. This is a feature that has only recently come to attention, specifically also in STS, in the wake of more extensive proteomic analysis of patient tissues, and may explain some of the discrepancies with previous literature linking high transcript-level complement activation with tumor malignancy. As proteins represent the functional work units of a cell, proteomic data might more faithfully represent the actual activation status of given pathways. Hence, these data suggest the existence of several molecular subtypes of feline FSA that also differ with regard to their clinical behavior, with immune-infiltrated tumors expressing high levels of immune checkpoint molecules and thus potentially amenable to treatment with ICI, and neuronal-like tumors associated with a potentially better clinical outcome. Moreover, this clearly highlights the need for more refined molecular diagnostic approaches to improve classification and diagnosis of feline FSA. To assess the potential of feline FSA as clinically amenable models to assess therapeutic strategies in a structured manner and inform novel therapeutic approaches for human patients, we analyzed to what extent molecular features are conserved between feline and human data.
Comparison of our data with the proteomic dataset on FSA and MFS by Tang et al. revealed striking similarities between the two species and demonstrated the use of our feline dataset to identify and prioritize potential tumor targets conserved across both species ( A and B). Moreover, by leveraging well-documented differences in clinical behavior between canine and feline FSA, we identified selective enrichment of DNA repair pathways in feline FSA and demonstrated a connection with tumor aggressiveness also in human STS ( D - H). The established standard treatment for feline FSA is radical surgical excision. However, in a substantial number of feline FSA patients, satisfactory tumor control cannot be achieved with surgery alone. Additional therapeutic strategies using systemic treatment could be highly beneficial, particularly in the metastatic setting, but their use is complicated by the high chemoresistance of feline FSA. To identify therapeutic vulnerabilities for neo-/adjuvant use to treat feline FSA, we selected a series of clinically available drugs based on the molecular features identified above and assessed the sensitivity of feline FSA cell lines to them. This validated the striking dependence of FSA on processes involving RNA transcription (Actinomycin, Doxorubicin) as well as mitotic spindle and G2M checkpoint activation (Vinca alkaloids) and uncovered a potential vulnerability of FSA towards ATR and PARP inhibition ( I - M). Neo- and adjuvant treatment with the Doxorubicin stereoisomer Epirubicin combined with surgery demonstrated superior tumor-free survival rates and disease-free interval compared to historic controls in 21 cats. Though not widely used currently, Actinomycin has been used in cats in the context of a rescue protocol for feline lymphoma, and Vinca alkaloids are clinically used in cats for other malignancies, but none of these has undergone thorough clinical assessment in the context of FSA. Interestingly, DNA repair pathways have been identified as potentially druggable targets in a subset of human STS and have sparked currently running clinical trials. To our knowledge, DNA repair inhibitors have not yet been assessed in the context of feline FSA or indeed any feline tumor. As such, our results uncover the therapeutic potential of ATR and PARP inhibition in the context of feline FSA for further clinical assessment. Collectively, the results presented herein will serve as a starting point for further studies specifically on feline FSA and its potential as a clinically amenable model for adult FSA. Better understanding of the biology driving these tumors will promote the development of novel diagnostic and therapeutic approaches to benefit patients from both species.
The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD055198. The RNA sequencing raw data was submitted to the GEO repository and is available under the accession id GSE275872. All other data supporting our findings is contained in the manuscript and in the supplementary figures and tables.
Mikiyo Weber: Writing – review & editing, Writing – original draft, Visualization, Validation, Resources, Methodology, Investigation. Daniel Fuchs: Writing – review & editing, Visualization, Validation, Software, Resources, Methodology, Investigation, Formal analysis, Data curation. Amiskwia Pöschel: Writing – review & editing, Visualization, Methodology, Investigation. Erin Beebe: Writing – review & editing, Supervision, Methodology, Formal analysis, Data curation. Zuzana Garajova: Methodology, Investigation. Armin Jarosch: Conceptualization, Investigation, Resources, Writing – review & editing. Laura Kunz: Writing – review & editing, Resources, Methodology, Investigation. Witold Wolski: Writing – review & editing, Software, Methodology, Investigation. Lennart Opitz: Writing – review & editing, Software, Methodology, Data curation. Franco Guscetti: Writing – review & editing, Validation, Methodology, Investigation. Mirja C. Nolff: Writing – review & editing, Writing – original draft, Supervision, Investigation, Formal analysis, Conceptualization. Enni Markkanen: Writing – review & editing, Writing – original draft, Supervision, Project administration, Investigation, Funding acquisition, Formal analysis, Data curation, Conceptualization.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Stable, fluorescent markers for tracking synthetic communities and assembly dynamics
Plant roots are colonised by a vast diversity of microorganisms, with Proteobacteria and Actinobacteria amongst the most abundant groups. These soil microorganisms are recruited in different root niches, including the rhizosphere (within a few mm of the root), rhizoplane (root surface) and endosphere (microbes between root cells). Furthermore, plants exude up to 20% of their fixed carbon into the rhizosphere, thereby shaping their root microbiome, which in turn influences plant growth. This two-way dialogue alters plant fitness, is crucial in nutrient cycling, promotes plant growth, primes plant defences and controls pathogens. The last two decades have seen an explosion in microbiome research on plants, animals and humans. Most plant studies have analysed microbiome composition by amplicon or genome sequencing under multiple conditions, including species and soil type. More recently, the use of synthetic DNA spikes enables absolute quantification of microbiome members directly in environmental samples. The cutting-edge challenge is to now move beyond describing and classifying microbiomes, to understand the mechanisms of microbiome assembly. However, due to the vast diversity of microbes, this has proved to be technically challenging. A key strategy to understand microbiome assembly is to establish a simpler representative/synthetic community (SynCom) to study and fine-tune plant–microbe interactions. One of the pivotal decisions to make when designing a SynCom is the choice of size, which mainly depends on the objective of the study. Vorholt et al. defined that a high-complexity SynCom (more than 100 members) aims to represent the original microbiome by maintaining its diversity, thereby reducing the risk of losing keystone species and essential microbe-microbe interactions. On the other hand, in a low-complexity SynCom (fewer than ten members), stochasticity is reduced, which increases experimental reproducibility and therefore allows causality to be established more accurately. Most SynComs are an attempt to produce a microbial culture collection with minimal strains representative of the original phylogenetic diversity. The profile, represented by the relative abundance of each strain in the assembled SynCom, is used as a phenotype under different conditions. An example is how the absence of coumarin, or the lack of cuticle biosynthesis, shifted the SynCom composition in Arabidopsis thaliana. A 185-member SynCom was used to interrogate the capacity for root growth inhibition (RGI), showing that Variovorax and related species within the SynCom have the capability to suppress RGI by manipulating plant hormone levels through auxin degradation. SynComs can improve plant yield, as shown by the 22-member sugarcane community which displaced 54% of the natural rhizosphere microbiota and increased sugarcane fresh weight 3.4-fold compared to non-inoculated plants. Whilst relative abundance quantification provides valuable insights, the power of absolute quantification reveals that specific microbial groups can maintain steady or increasing absolute abundance, even in scenarios where their relative abundances may decrease. Absolute quantification emerges as a superior approach, offering a more accurate understanding of microbiome assembly dynamics and mitigating potential biases inherent in relative measurements. Niu et al.
Niu et al. measured the absolute abundance of each bacterial strain within a seven-member maize SynCom by complex culturing, including testing of 288 growth media and antibiotic combinations. The seven-membered community was stable on roots, where Enterobacter cloacae AA4 was a keystone species, as its absence led to collapse of the SynCom. This research highlights that one of the principal challenges in studying microbiome assembly is the identification and quantification of different bacteria during colonisation.
Most SynCom studies rely on 16S rRNA sequencing to describe assembly of the community, which only reveals relative microbial abundance on the roots. In contrast, differential culturing as used by Niu et al. allows for experimental intervention and establishes causality in microbiome assembly, although it is labour-intensive and limited to the specific organisms for which it was developed. Bacterial communities can be visualised and differentiated in situ by applying techniques based on the hybridisation of fluorescently labelled antisense 16S rRNA probes (FISH), which can be designed for broad groups (e.g. Actinobacteria or Betaproteobacteria) or for specific strains. FISH was applied to a seven-member SynCom in which each strain-specific probe was labelled with a particular fluorophore that could be distinguished by image deconvolution. However, FISH has limitations such as cell loss during sample fixation and low accuracy due to imperfect probe coverage or reduced bacterial membrane permeability. In small SynComs, fluorescent proteins can be expressed in bacteria; however, the limitation is the number of distinguishable proteins that can be used at the same time. Whitaker et al. developed a technique with six unique fluorescent signatures by utilising two fluorescent proteins (GFP and mCherry) with different ribosome binding sites (RBSs) to provide varied expression levels. When applied to a six-member Bacteroides SynCom colonising the guts of mice, each strain could be differentiated by linear deconvolution. Whilst this works well with strains of the same species, interspecies differentiation based on fluorescence intensity of a single fluorescent protein would require laborious tuning of expression.
The aforementioned limitations led us to develop a remarkably simple differential fluorescent marking (DFM) method using three fluorescent proteins (mTagBFP, sYFP2 and mCherry) with distinct excitation and emission spectra, allowing simultaneous detection by flow cytometry or fluorescence microscopy. Using the DFM strategy, we generate and distinguish six fluorescence patterns, i.e. three single fluorescent proteins and three combinations of two. Plasmid-based protein expression can lead to issues such as gene dosage-dependent toxicity, as well as problems with plasmid stability and host range. Therefore, we adapted a mini-Tn7 delivery system to generate the plasmid Tn7 Suicidal low COpy for Universal Transfer (pTn7-SCOUT) family, enabling integration of transgenes downstream of the highly conserved chromosomal glmS gene in bacteria. This approach is compatible with our modular and hierarchical cloning system, BEVA. We tested DFM in Rhizobium leguminosarum bv. viciae 3841 (Rlv3841) and applied it to a six-member synthetic community (OxCom6) consisting of Alpha-, Beta- and Gammaproteobacteria. Using flow cytometry, we both differentiated and quantified the assembly of individual members of OxCom6 in nutrient-rich media and during colonisation of pea and barley roots.
Our results demonstrate that DFM is an outstanding resource for tracking and distinguishing members of bacterial communities in vitro and, more importantly, in diverse and complex environmental settings.
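As a simple illustration of why three fluorescent proteins yield six distinguishable DFM labels, the short Python sketch below (purely illustrative, not part of the published workflow) enumerates the single and pairwise combinations used by the strategy:

```python
from itertools import combinations

# The three spectrally distinct fluorescent proteins used in the DFM strategy
fluorophores = ["mCherry (R)", "sYFP2 (Y)", "mTagBFP (B)"]

# Single markers plus unordered pairs give the six distinguishable patterns
patterns = [c for size in (1, 2) for c in combinations(fluorophores, size)]

for p in patterns:
    print(" + ".join(p))
print(f"Total distinguishable patterns: {len(patterns)}")  # 3 singles + 3 pairs = 6
```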
Primers and plasmids
Primers and plasmids used in this study are shown in Table S and Table S, respectively. All pTn7-SCOUT plasmids are available in Addgene; see Table S for codes.
Bacterial media and growth conditions
Bacterial strains used in this work are listed in Table S. Escherichia coli strains were grown in LB at 37 °C, supplemented with antibiotics at the following concentrations: ampicillin (Ap) 100 µg·mL −1, gentamicin (Gm) 10 µg·mL −1, kanamycin (Km) 20 µg·mL −1, tetracycline (Tc) 10 µg·mL −1 and spectinomycin (Sp) 50 µg·mL −1. The remaining strains were grown in rich Tryptone Yeast (TY) medium supplemented with 20 mM succinate at 28 °C, unless specified otherwise. The following antibiotic concentrations were used: Rhizobium leguminosarum bv. viciae (Rlv3841) Gm 20 µg·mL −1, neomycin 40 µg·mL −1, Tc 5 µg·mL −1, Sp 100 µg·mL −1; Ochrobactrum pituitosum AA2 and Pseudomonas fluorescens SBW25 Gm 20 µg·mL −1; Enterobacter cloacae AA4 Km 20 µg·mL −1; Achromobacter xylosoxidans AT1 Km 100 µg·mL −1; and Azoarcus olearius DQS-4 Sp 200 µg·mL −1. For assessment of the mean generation time (MGT) of Rlv3841 labelled with DFM (Rlv3841 DFM), strains were grown in UMS supplemented with 10 mM glucose and 10 mM NH 4 Cl. Plasmids were transformed into chemically competent E. coli strains DH5α, Transformax™ EC100D™ pir + (Lucigen) and Transformax™ EC100D™ pir-116 (Lucigen), and were introduced into recipient bacteria by triparental conjugation with E. coli DH5α as plasmid donor and E. coli HB101 carrying the helper plasmid pRK2013. The exception was the pTn7-SCOUT plasmids, which were conjugated by tetraparental conjugation using E. coli Transformax™ EC100D™ pir + as plasmid donor, E. coli S17-1 containing pTNS3 as transposase donor and E. coli carrying pRK2013 as helper. Nitrofurantoin 10 µg·mL −1 was used to counter-select against E. coli strains.
Construction of pTn7-SCOUT plasmids
The pUC18R6KT-mini-Tn7T-Km was obtained from Addgene (catalogue no. 64969) and used as a scaffold to generate the Golden Gate level 1 master plasmid pTn7-SCOUT10. BsaI and Esp3I restriction sites (RS) were removed, and two cloning sites were added: a Golden Gate level 1 cloning site and an Esp3I cloning site to allow addition of antibiotic markers (Fig. ). Five different fragments were generated by PCR and assembled by Golden Gate using BpiI. The first fragment was amplified using oxp3349-oxp3350 from the pUC18R6KT-mini-Tn7T-Km multicloning site (MCS) to the BsaI RS located in the ampicillin resistance marker (Ap R), changing a nucleotide in a serine codon (748A > G). The second fragment was amplified with oxp3351-oxp3352 from the BsaI RS located in Ap R to two Esp3I RS located in the plasmid backbone between the Ap R and R-Tn7. The third fragment was amplified with oxp3353-oxp3354 from the Esp3I RS in the backbone plasmid to a region between the flippase recognition target (FRT) site and the 3′-end of the Km R. The fourth fragment was amplified with oxp3355-oxp3356 from the region between the 5′-end of Km R and the FRT site to the mini-Tn7 MCS. The fifth fragment was amplified with oxp2980-oxp2981 from the pOGG093 plasmid, which amplifies the Golden Gate level 1 cloning site containing the Plac::lacZα-T0 region. Fragments were amplified with DNA polymerase Q5 (NEB), cleaned (GeneJet PCR purification kit, Thermo Fisher), assembled by Golden Gate with BpiI as described by Geddes et al., cloned in Transformax™ EC100D™ pir-116 (Lucigen), miniprepped and Sanger sequenced. pTn7-SCOUT10 has BsaI RS compatible with Golden Gate level 1 assembly and lacZα as cloning marker, resulting in blue/white colony colour selection when plated on media supplemented with X-gal 50 µg·mL −1. To generate the Golden Gate level 2 master plasmid pTn7-SCOUT20, a new selection marker was constructed. The chromogenic gene tsPurple expression cassette was amplified from pOPS1522 with oxp4051-oxp4052, cloned into pTn7-SCOUT10 by Golden Gate using BsaI, transformed in Transformax™ EC100D™ pir-116 (Lucigen), miniprepped and Sanger sequenced. pTn7-SCOUT20 has BpiI RS compatible with Golden Gate level 2 assembly and tsPurple as cloning marker, resulting in purple/white colony colour selection. The antibiotic resistance cassettes within the mini-Tn7 were cloned into pTn7-SCOUT10 and pTn7-SCOUT20 by a Golden Gate reaction using Esp3I. The pLVC-P2 modules of the gentamicin resistance marker (Gm R, pOGG009), tetracycline resistance marker (Tc R, pOGG042) and kanamycin resistance marker (Km R, pOGG008) were used. The spectinomycin resistance marker (Sp R) was amplified with oxp3357-oxp3358 from pUC18T-mini-Tn7T-aad9 and cloned by Golden Gate reaction with Esp3I. A family of pTn7-SCOUT plasmids was generated: level 1 pTn7-SCOUT11 (Gm R), pTn7-SCOUT12 (Km R), pTn7-SCOUT13 (Tc R) and pTn7-SCOUT14 (Sp R); and level 2 pTn7-SCOUT21 (Gm R), pTn7-SCOUT22 (Km R), pTn7-SCOUT23 (Tc R) and pTn7-SCOUT24 (Sp R) (see Table, Table S).
Development of compatible flippase plasmids
The antibiotic marker within the mini-Tn7 is flanked by FRT sites, allowing its excision from the chromosome by the yeast recombinase flippase (Flp) following mini-Tn7 insertion. The pFLP2 plasmid with Amp R was obtained from Herbert P. Schweizer. The sacB-flp-cI genes were amplified with oxp3417-oxp3418, purified and assembled by Golden Gate with BsaI into the destination vectors pOGG024 (Gm R), pOGG023 (Km R) and pOGG277 (Tc R). Three new pFLP2 plasmids were generated: pFlp-Km (pOPS1466; flp-cl-sacB-pL1V-Lv1-neo-pBBR1-ELT3), pFlp-Gm (pOPS1467; flp-cl-sacB-pL1V-Lv1-gent-pBBR1-ELT3) and pFlp-Tc (pOPS1468; flp-cl-sacB-pL1V-Lv1-TetAR-pBBR1-ELT3) (Table, Table S).
Assembly of Golden Gate plasmids
Assembly of plasmids was done by Golden Gate as described by Geddes et al. Esp3I was used for the assembly of level 1 cloning plasmids (pL1V-Lv1), BsaI for the assembly of the expression cassette into level 1 plasmids and BpiI for assembly of level 1 modules into level 2 plasmids. Specific details about each plasmid construction are described in .
att amplification, sequencing and analysis
DNA extraction from each DFM strain was achieved by alkaline lysis (0.05 M NaOH, 0.25% SDS) and used as a template to amplify by PCR the region from the 3′-end of glmS to Tn7-R. Primer PTn7R on Tn7-R was used as a reverse primer and a specific forward primer was designed for each strain (see Table S). Amplification was carried out in a 50 µL PCR reaction containing 5–10 ng of isolated DNA and 2 U of Q5 DNA polymerase (NEB). PCR products were visualised on 1% agarose gels, purified (Monarch® PCR & DNA Cleanup kit, NEB) and Sanger sequenced (Eurofins). Alignment of sequences was performed using MUSCLE implemented in MEGA X software. The alignment consensus was calculated in Jalview.
Development and assessment of landing pad introduction into strains
To construct Sinorhizobium meliloti CL150 containing the landing pad (SmLP), we followed the same procedure as described by Haskett et al.
Firstly, a 282 bp fragment containing the Tn7 attB site was PCR-amplified from the Rlv3841 chromosome using primers oxp3192 and oxp3193 (Table S). Secondly, 1 kb DNA fragments of the two regions flanking the harbour site of S. meliloti CL150 were amplified using primer pairs oxp3190-oxp3191 and oxp3194-oxp3195. These three fragments were assembled by HiFi (NEB) with pK19mobSacB digested with SmaI, resulting in plasmid pOPS1246. Plasmid pOPS1246 was introduced into S. meliloti CL150, and sucrose selection was used to stably integrate the Tn7 attB site of Rlv3841 (landing pad) into a harbour site in the chromosome by homologous recombination, resulting in the SmLP strain. To test mini-Tn7 integration specificity into the landing pad in Azorhizobium caulinodans ORS571 containing the landing pad (AcLP) and in SmLP, two sets of primer pairs were used to PCR-amplify from the 5′-end of the Rlv3841 attB-containing site fragment to Tn7-R (oxp2986 and oxp1390) and from Tn7-L to the 3′-end of the Rlv3841 attB-containing site fragment (PTn7L and oxp5053).
Counterselection for Flp-containing plasmids
Rlv3841 containing a mini-Tn7-Gm-sfGFP (Rlv3841 G-Gm) was conjugated with pOPS1468 (flp-cl-sacB-pL1V-Lv1-TetAR-pBBR1-ELT3), and colonies were selected on TY containing Tc. Transconjugants were pooled and plated on TY supplemented with sucrose (12%). Fifty colonies were patched on TY media with and without Tc. Strains unable to grow on Tc were PCR-tested with primers oxp3878 and oxp3879, which bind between T0 and T1 and on the FRT sequence. Two bands of 272 bp and 1240 bp were present in Rlv3841 G-Gm, but only the 272 bp band in Rlv3841 G, which confirms excision of the Gm cassette.
Microscopy images
Microscopy images were taken of cultures of Rlv3841 DFM strains growing on TY agar plates using a Leica M165FC. Detection of fluorescent proteins was as follows: mTagBFP with ET BFP filter (10,450,329, excitation: 405/20 nm, barrier: 460/40 nm) and exposure time 0.7 s; sYFP2 with ET YFP filter (10,447,410, excitation: 500/20 nm, barrier: 535/30 nm) and exposure time 1 s; and mCherry with ET mCherry filter (10,450,195, excitation: 560/40 nm, emission: 630/74 nm) and exposure time 0.2 s. Gain was set at 1×, saturation at 1.0 and gamma at 1.01 for all images. A mix containing equal amounts of cultures of Rlv3841 DFM and the unlabelled strain was imaged with a Zeiss LSM 880 Airy Scan confocal microscope and analysed with ZEN Black v 3.6 software. To visualise fluorescent tags, mCherry was excited with a 561 nm wavelength laser and emission detected between 598 and 649 nm, sYFP2 was excited with a 488 nm wavelength laser and emission detected between 498 and 562 nm, and mTagBFP was excited with a 405 nm wavelength laser and emission detected between 440 and 490 nm. Two channels were used to separate the overlapping excitation and emission of sYFP2 and mTagBFP: channel one excited and detected mCherry and mTagBFP, and channel two excited and detected sYFP2.
Flow cytometry
An Amnis® Cellstream® (Luminex Ltd.) flow cytometer with autosampler, equipped with 405 nm, 488 nm and 561 nm lasers to excite TagBFP, sfGFP/sYFP2 and mCherry respectively, was used. Flow rates were set to low speed/high sensitivity (3.66 µL·min −1) and 5000–20,000 events, defined by our gating parameters as the Bacteria population, were counted for each sample. Using Cellstream® Analysis 1.3.384 software, the Bacteria population was defined as the concentrated events area when plotting size (FSC) and granularity (SSC).
The Bacteria population was afterwards gated based on FSC (threshold > 0) and the aspect ratio of SSC (threshold > 0.4), defining the Singlets population. Singlets events were then gated based on their fluorescence emission, generating three colour populations: Red, Yellow and Blue for each fluorescent protein, mCherry, sYFP2 and TagBFP, respectively. The Red population comprises singlet events detected in the 561–611/31 channel above 550 FI units. The Yellow population comprises singlet events detected in the 488–528/46 channel above 500 FI units. The Blue population comprises singlet events detected in the 405–456/51 channel above 450 FI units (Fig. S). Afterwards, we created six Combined populations defined by the presence or absence of the Red, Yellow and Blue colour populations: R (exclusively Red), Y (exclusively Yellow), B (exclusively Blue), RY (exclusively Red and Yellow), RB (exclusively Red and Blue) and YB (exclusively Yellow and Blue). For instance, an event is assigned to the R population if it belongs to the Red population whilst not belonging to either the Yellow or Blue population, implying that only signal for mCherry was detected. The number of events·mL −1 (emL) was recorded for each Combined population in each sample and transformed into events·g root −1 (egr). All flow cytometry data are available at http://flowrepository.org; experiment codes are shown in Table S.
Growth curves to assess growth fitness
To calculate the MGT of each Rlv3841 strain labelled with DFM, strains were grown in minimal medium (UMS). A single colony of bacteria was streaked onto 10 mL UMS agar slopes supplemented with 10 mM glucose and 10 mM NH 4 Cl and incubated for 2 days. Cultures were resuspended in 4 mL of UMS supplemented with 10 mM glucose and 10 mM NH 4 Cl and washed three times. The OD 600nm was measured and 400 µL of 10 7 cells·mL −1 were inoculated into 24-well plates (Vision Plate™, 4titude) and incubated in a plate reader (FLUOstar Omega, BMG Labtech) for 72 h at 700 rpm and 28 °C. MGT was calculated as the number of hours it takes the population to double whilst in exponential growth phase.
Inoculum preparation for pea root colonisation
A single colony of bacteria was streaked onto 10 mL TY agar slopes supplemented with 20 mM succinate in 30 mL universal tubes. For E. cloacae AA4, O. pituitosum AA2 and P. fluorescens SBW25, cultures were incubated overnight; A. xylosoxidans AT1 cultures were incubated for 1 day, and A. olearius DQS-4 and Rlv3841 for 2 days. Once grown, cultures were resuspended in 4 mL of sterile 0.9% NaCl. OD 600 nm was measured and cultures were set at 10 9 cells·mL −1. For competition and community experiments, cultures were mixed in equal ratios at 10 9 cells·mL −1. Inocula were diluted to 10 5 cells·mL −1 and 1 mL was added to each plant.
Root colonisation experiment
Pea seeds were sterilised in 70% ethanol for 1 min, followed by 5 min in 3% NaClO. Barley seeds were sterilised in 70% ethanol for 1 min, followed by 5 min in 7% NaClO plus 0.1% Tween20 (Sigma-Aldrich). Seeds were washed with sterile distilled water. Pea seeds were pregerminated on 0.8% water agar for 3 days at 23 °C in the dark and were then transferred into sterilised boiling tubes containing fine vermiculite and 25 mL of root nutrient solution. Sterilised barley seeds were transferred into boiling tubes containing fine vermiculite and 25 mL of root nutrient solution. At 7 days after sterilisation, each seed was inoculated with a total of 10 5 cells.
At 7 days post-inoculation (dpi) (1 to 14 dpi for the assembly dynamics experiment), plants were harvested by inverting and shaking the tubes. Roots were dipped in sterilised water to remove loosely attached vermiculite, separated from the seed and shoot by cutting the root below the seed, weighed, and transferred to 50-mL Falcon tubes. Then, 25 mL of harvest solution (0.9% NaCl, 0.02% Silwet L-77) was added and samples were vortexed at maximum speed for 1 min. Further, 1 mL was passed through 40 µm filters (FLOWMI™ cell strainers) and 100 µL of each sample was transferred to 96-well U-bottom plates for single-cell quantification using an Amnis® Cellstream® (Luminex Ltd.) flow cytometer.
Quantification of background from plant roots
Uninoculated pea and barley plants were grown for 14 days, and samples were treated as described above. For each DFM population, emL was recorded and converted into egr. The values obtained were defined as root background and subtracted from the total egr obtained from samples with bacterial inoculation (Table S).
Statistical analysis
Statistical analyses were performed in Prism 10 v10.02.
Nitrogenase activity
Nitrogenase activity of A. olearius DQS-4 and A. olearius DQS-4 labelled with sYFP2 (AoDQS-4 Y) on barley plants was assessed as described by Haskett et al.
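As a worked illustration of the conversion described under "Flow cytometry" and "Quantification of background from plant roots", the Python sketch below converts events·mL −1 (emL) to events·g root −1 (egr) and subtracts the uninoculated-root background. The scaling by the 25 mL harvest volume and the function names are assumptions made for illustration; they are not taken from the published analysis pipeline.

```python
# Minimal sketch under stated assumptions: counts measured in the 25 mL harvest
# solution are scaled to the fresh weight of the root they were washed from,
# then the background measured on uninoculated roots is subtracted.
HARVEST_VOLUME_ML = 25.0  # 0.9% NaCl, 0.02% Silwet L-77 added per root sample

def events_per_gram(emL: float, root_mass_g: float,
                    volume_ml: float = HARVEST_VOLUME_ML) -> float:
    """Convert events per mL (emL) to events per g root (egr); assumed formula."""
    return emL * volume_ml / root_mass_g

def background_corrected(egr_sample: float, egr_uninoculated: float) -> float:
    """Subtract the uninoculated-root background, clipping at zero."""
    return max(egr_sample - egr_uninoculated, 0.0)

# Hypothetical example: 4.0e4 events per mL recovered from a 0.8 g pea root,
# with a background of 5.0e3 egr for the same Combined population.
sample_egr = events_per_gram(4.0e4, 0.8)
print(background_corrected(sample_egr, 5.0e3))
```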
Development of pTn7-SCOUT plasmids
Genomic integration of fluorescent markers is crucial for gene stability when studying bacteria in complex environments, due to the absence of plasmid-associated antibiotic selection. However, fluorescent protein expression must be tuned to ensure sufficient levels of protein for detection by microscopy and flow cytometry, whilst also avoiding toxicity due to overexpression. To overcome this challenge, we generated pTn7-SCOUT (plasmid Tn7 Suicidal low COpy for Universal Transfer), a family of mini-Tn7 delivery plasmids that are compatible with BEVA modular Golden Gate cloning and which only replicate in strains containing the pir genes. The pTn7-SCOUT plasmid family facilitates the chromosomal integration of multiple expression cassettes in a diverse group of Proteobacteria. This can be applied, as shown in this work, to tracking bacterial communities through the quantification of fluorescent proteins.
To develop the master pTn7-SCOUT10 (Fig. ), we used the pUC18R6K-mini-Tn7T-Km developed by Choi et al. as a scaffold. First, BsaI and Esp3I RS present in the pUC18R6K-mini-Tn7T-Km plasmid were mutated, since BsaI and Esp3I sites are used for level 1 and antibiotic marker cloning, respectively. Second, the Km R located in the mini-Tn7 between the FRT sites was replaced with an Esp3I cloning site to allow addition of different selection markers. Lastly, the MCS located in the mini-Tn7 was substituted with a level 1 Golden Gate cloning site (lacZα) for blue-to-white selection, which facilitates the assembly of one expression cassette using BsaI. To enable the assembly of multiple expression cassettes, we generated the level 2 master plasmid pTn7-SCOUT20 by replacing the pTn7-SCOUT10 cloning site with a level 2 cloning site (tsPurple) for purple-to-white selection. Finally, we independently cloned the antibiotic markers gentamicin (Gm R), kanamycin (Km R), tetracycline (Tc R) and spectinomycin (Sp R) by Golden Gate reaction into the Esp3I cloning site, generating the pTn7-SCOUT family (Table ). The existence of an FRT site on either side of the antibiotic expression cassette on mini-Tn7 means that, following integration, the antibiotic marker can be removed using Flp. To facilitate this, we also developed new antibiotic versions of the pFLP2 plasmid (flp, cI, sacB, Ap R) (Table ) to ensure compatibility with the strains used in this study. Rhizobium leguminosarum bv. viciae 3841 (Rlv3841) containing the mini-Tn7-Gm-sfGFP (Rlv3841 G-Gm) was conjugated with pOPS1468 (flp-cI-sacB-Tc-pBBR) to excise the Gm R from the integrated mini-Tn7. After sucrose selection, 100% of the strains were sensitive to Gm and the absence of the Gm R was confirmed by PCR.
Analysis of mini-Tn7 integration delivered by pTn7-SCOUT
In the model bacterium Escherichia coli, integration of the Tn7 transposon occurs downstream of the glmS gene. Different strains of Alpha-, Beta- and Gammaproteobacteria were tested for mini-Tn7 integration delivered by pTn7-SCOUT, and the integration site was assessed. The region from the 3′ end of the glmS gene to the upstream end of the mini-Tn7 (Tn7-R) was PCR-amplified and sequenced (see Table S for primers). Nucleotide alignment of the Tn7 integration site for these strains revealed that, as previously observed in E. coli K12 and Pseudomonas aeruginosa PAO1, Tn7 integration occurs 25 bp from the glmS stop codon (Fig. ). However, in P. protegens Pf-5 and Achromobacter xylosoxidans AT1, integration occurs 24 bp downstream of glmS, and in Azoarcus olearius DQS-4 and Enterobacter cloacae AA4 at 26 bp. Whilst 90% of the time the Tn7 transposon integrates 25 bp downstream of glmS in E. coli K12, it has been shown to integrate at a lower frequency at either 24 bp or 26 bp downstream. Therefore, the different integration locations (attB) identified among the strains tested could be related to the nature of Tn7 integration itself rather than a strain-specific effect. Upon Tn7 integration there is a duplication of 5 bp immediately upstream of the attB site. Our results show that there is no conservation in this 5 bp sequence, suggesting that Tn7 does not require a specific recognition sequence for integration, but rather integrates at a specific distance from the glmS gene (Fig. ). Whilst we have demonstrated that Tn7 integration occurs 25 ± 1 bp from the glmS stop codon in diverse species, we found that some bacteria, such as Azorhizobium caulinodans ORS571 and Sinorhizobium meliloti CL150, encode a gene in this region that appears to be lethally disrupted by mini-Tn7 insertion. We have previously overcome this issue by introducing a Tn7 landing pad derived from the Rlv3841 Tn7 attB site into a neutral region of the A. caulinodans ORS571 (AcLP) chromosome by double homologous recombination. This landing pad provides an alternative, non-lethal site which permits integration by Tn7.
Here, we used the same strategy to integrate the landing pad into the S. meliloti CL150 chromosome at the same neutral site previously used to harbour a recombinase attB, creating strain SmLP. We tested the specificity of integration into these sites for AcLP and SmLP in three independent conjugation experiments and were able to isolate mini-Tn7 exconjugants of each strain harbouring the landing pad, but not of their corresponding wild-type strains, indicating that the landing pads were being utilised for integration. Ten AcLP and ten SmLP colonies putatively harbouring mini-Tn7 from each of the three conjugation experiments were screened by PCR bridging across the left Tn7 attB site and the chromosomal landing pad, confirming integration at the desired site in at least 90% of colonies for AcLP (9/10, 10/10 and 9/10 colonies produced bands of the correct size) and 100% for SmLP (10/10, 10/10 and 10/10 colonies produced bands of the correct size). One amplicon generated from each independent experiment was sequenced and successfully aligned to the predicted in silico sequences to further confirm this conclusion. Clearly, this landing pad strategy is robust and can be applied to most strains recalcitrant to Tn7 insertion at the native glmS position.
Expression of single and dual fluorescent markers permits differentiation of up to six bacteria
Single fluorescent proteins are widely used to track bacteria in plant–microbe interaction studies, but this approach is restricted by the availability of fluorophores and the ability to detect them. Our differential fluorescent marking (DFM) strategy couples the use of three distinguishable fluorescent proteins, mCherry, sYFP2 and TagBFP (Fig. S), with stable, site-specific chromosomal integration of mini-Tn7 delivered by pTn7-SCOUT plasmids. DFM uses the aforementioned fluorescent proteins in single and double combinations to generate six unique patterns. The three single constructs are formed by cloning either mCherry (R), sYFP2 (Y) or TagBFP (B), whilst the three double markers were constructed by cloning the fluorescent proteins in pairs: mCherry and sYFP2 (RY), mCherry and TagBFP (RB), and sYFP2 and TagBFP (YB). To test our DFM strategy, Rlv3841 was labelled with each DFM construction (Rlv3841 R, Rlv3841 Y, Rlv3841 B, Rlv3841 RY, Rlv3841 RB and Rlv3841 YB) (Table ), spotted on agar, and after two days the fluorescence of each spot was detected using a fluorescent stereomicroscope, confirming differentiation among the six DFM patterns, which are not present in the unlabelled strain (Rlv3841 U) (Fig. A). We expanded our investigation by combining Rlv3841 U and each Rlv3841 DFM strain in equal proportions. The resulting mixture was visualised using a Zeiss LSM 880 Airy Scan confocal microscope, confirming differentiation at the single-cell level among the six distinct DFM patterns and the unlabelled strain (Fig. S). Subsequently, we ran these Rlv3841 DFM strains and Rlv3841 U independently through a flow cytometer and used Cellstream® Analysis software to distinguish the six strains based on the presence or absence of the three fluorescent proteins (Fig. B). First, the Bacteria population was defined as the concentrated area based on size (FSC) and granularity (SSC), followed by the definition of the Singlets population based on FSC and the aspect ratio of SSC (Fig. S A and B).
Our gating strategy is followed by the delineation of three colour populations, one for each fluorescent marker, as follows: for mCherry expression, the Red population was defined as events detected in the 561–611/31 channel above 550 FI units; for sYFP2 expression, the Yellow population as events detected in the 488–528/46 channel above 500 FI units; and for mTagBFP expression, the Blue population as events detected in the 405–456/51 channel above 450 FI units (Fig. S C). Afterwards, we assigned six Combined populations defined by the presence or absence of the colour populations Red, Yellow and Blue: R (exclusively Red), Y (exclusively Yellow), B (exclusively Blue), RY (exclusively Red and Yellow), RB (exclusively Red and Blue) and YB (exclusively Yellow and Blue) (Fig. S D). The graphs in Fig. B show the detection by flow cytometry of each colour population (column) for each Rlv3841 DFM strain (rows), which confirms the six unique DFM patterns observed with the stereomicroscope (Fig. A). Next, we calculated the accuracy of our flow cytometry gating strategy in assigning each Rlv3841 DFM strain to its corresponding colour population, showing that more than 90% of events were determined correctly, whereas Rlv3841 U showed less than 1.7% of Singlets events belonging to any of these colour populations (Table ). This 1.7% misassignment corresponds to events detected in the Blue colour population.
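The presence/absence logic behind the six Combined populations can be summarised in a few lines of Python. The sketch below is illustrative only: it uses the fluorescence-intensity thresholds quoted above (550, 500 and 450 FI units for the mCherry, sYFP2 and mTagBFP channels), but the dictionary keys and function name are hypothetical and this is not the Cellstream® Analysis implementation.

```python
# Illustrative sketch of the Combined-population assignment for a single event
# that has already passed the Bacteria and Singlets gates.
THRESHOLDS = {"R": 550.0,   # mCherry,  561-611/31 channel
              "Y": 500.0,   # sYFP2,    488-528/46 channel
              "B": 450.0}   # mTagBFP,  405-456/51 channel

VALID = {"R", "Y", "B", "RY", "RB", "YB"}  # the six Combined populations

def classify_event(fi: dict) -> str | None:
    """Return the Combined population of an event from its FI values, or None."""
    positive = "".join(c for c in "RYB" if fi.get(c, 0.0) > THRESHOLDS[c])
    # Unlabelled (no channel positive) or triple-positive events get no label
    return positive if positive in VALID else None

# Example: an event bright in the mCherry and mTagBFP channels only -> 'RB'
print(classify_event({"R": 900.0, "Y": 120.0, "B": 700.0}))
```

Tallying the label returned for each singlet event in a sample would then give the emL counts per Combined population that are converted to egr as described in the Methods.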
The accuracy of our flow cytometry gating strategy in detecting each DFM pattern was assessed by calculating the percentage of each Combined population (R, Y, B, RY, RB and YB) for each Rlv3841 DFM strain (Rlv3841 R, Rlv3841 Y, Rlv3841 B, Rlv3841 RY, Rlv3841 RB and Rlv3841 YB). The results showed an accuracy of more than 95% in assigning the correct Combined population to the corresponding DFM strain, with almost complete accuracy for Rlv3841 B (Table ). In this case, 99.9% of the events detected when running Rlv3841 B through the flow cytometer by itself were assigned to the corresponding B Combined population (Table ). Next, we evaluated the precision of our gating strategy in discriminating each Rlv3841 DFM strain when present in a mixed sample containing an equal number of each strain. The number of events for each Combined population was calculated, revealing that one sixth of the total number of events was assigned to each Rlv3841 DFM version (Table ). To assess whether the presence of any DFM combination had a growth effect in Rlv3841, the MGT in minimal medium was calculated and compared to Rlv3841 U. No differences were observed for any of the Rlv3841 DFM strains, nor for each antibiotic version carrying an sfGFP expression cassette, nor for the different colour combinations (Table ). This is consistent with previous studies showing that a fluorescent protein has no effect on fitness when integrated in single copy using mini-Tn7. To validate the use of DFM combined with flow cytometry to assess bacterial colonisation on plant roots, we inoculated Rlv3841 R onto pea and quantified colonisation at 7 dpi by colony counts and flow cytometry. The number of Rlv3841 R counted with flow cytometry was 6 · 10 5 ± 4 · 10 5 egr and by colony count 1.1 · 10 6 ± 8.6 · 10 5 CFU·g root −1, showing no significant differences (p value = 0.4375, Wilcoxon test) and demonstrating that flow cytometry gives numbers comparable to CFU, as shown for Herbaspirillum colonising rice roots. Subsequently, we tested the capacity of each Rlv3841 DFM strain to grow on pea roots in single inoculation and in competition with Rlv3841 U. No significant differences were observed, confirming that DFM does not affect the competitive colonisation ability of the strain (Table ). Finally, we examined the capacity to differentiate each Rlv3841 DFM strain when inoculated in equal amounts on pea roots. At 7 dpi, no significant differences were observed among the Rlv3841 DFM strains (Table ). These results confirm that DFM combined with flow cytometry can be used to simultaneously differentiate and quantify up to six bacterial strains from both liquid culture and plant samples, with no deleterious effects on bacterial fitness. Since one member of OxCom6 is capable of nitrogen fixation, we tested whether the presence of mini-Tn7 affects the capacity of A. olearius DQS-4 to fix nitrogen on barley roots. The nitrogenase activity of the A. olearius DQS-4 wild-type strain was 208.1 ± 44.6 nmol ethylene·plant −1 h −1, and that of A. olearius DQS-4 carrying the integrated mini-Tn7 was 176.6 ± 24 nmol ethylene·plant −1 h −1. A t test showed no significant differences between strains (p value = 0.25).
Tracking bacteria in synthetic communities using differential fluorescent markers
To test the accuracy of DFM in discriminating, tracking and quantifying individual members of a bacterial community, a model SynCom (OxCom6) was assembled with well-characterised root-colonising strains, all of which are amenable to genetic modification. These belong to the Alphaproteobacteria (Ochrobactrum pituitosum AA2, R. leguminosarum bv. viciae 3841), Betaproteobacteria (A. xylosoxidans AT1, A. olearius DQS-4) and Gammaproteobacteria (E. cloacae AA4 and P. fluorescens SBW25). Each member of the OxCom6 community was labelled with a specific DFM combination: O. pituitosum AA2 was labelled with mCherry (OpAA2 R), R. leguminosarum bv. viciae 3841 with mCherry and mTagBFP (Rlv3841 RB), A. olearius DQS-4 with sYFP2 (AoDQS-4 Y), A. xylosoxidans AT1 with sYFP2 and mTagBFP (AxAT1 YB), E. cloacae AA4 with mCherry and sYFP2 (EcAA4 RY) and P. fluorescens SBW25 with mTagBFP (PfSBW25 B) (Table , and Table S for details on the strains used). Labelling of the OxCom6 strains with each DFM pattern did not have any effect on fitness (Table S) or competitive colonisation (Table S). Similar to the observations for Rlv3841, no differences were observed when comparing CFU·mL −1 and events·mL −1 for each OxCom6 strain labelled with DFM (Table S). Subsequently, we monitored the assembly of OxCom6 in nutrient-rich media over a span of 96 h and on pea and barley roots for a duration of 14 days (Fig. ). The results from the OxCom6 assembly in nutrient-rich media (Fig. A) revealed that EcAA4 RY exhibited robust and sustained growth, reaching a maximum count of 1.5·10 9 events·mL −1 (emL) within 24 h. In contrast, the other members of OxCom6 reached a growth plateau at 61 h. OpAA2 R and PfSBW25 B attained peak counts of 2·10 8 and 1.7·10 8 emL, respectively. Similarly, AxAT1 YB and Rlv3841 RB achieved comparable plateau levels, recording 8.4·10 6 and 8.8·10 6 emL, respectively. Meanwhile, AoDQS-4 Y reached a maximum of 4.7·10 6 emL. Notably, EcAA4 RY demonstrated the fastest growth rate among the strains, establishing itself as the most prolific and therefore most abundant member when OxCom6 assembled in rich media. Subsequently, the assembly dynamics of OxCom6 were tracked on pea roots over 14 days (Fig. B). At 1 dpi, EcAA4 RY emerged as the predominant coloniser, accounting for 10 6 egr.
However, by 2 dpi, PfSBW25 B displayed higher counts than EcAA4 RY, recording 3.7·10 6 ± 3.2·10 5 and 1.4·10 6 ± 7.3·10 5 egr, respectively. This disparity became significant from 3 dpi, with colonisation counts of 4.2·10 6 ± 2.4·10 6 egr for EcAA4 RY and 1.7·10 7 ± 7.9·10 6 egr for PfSBW25 B (paired t test p value = 0.001). Both strains achieved and sustained a plateau from 3 dpi onward, with counts of circa 1.7–4.3·10 7 and 1.8–4.2·10 6 egr, respectively. Starting at 10 dpi, a consistent rise was observed in the counts of Rlv3841 RB and OpAA2 R. Rlv3841 RB exhibited an increase from 6 to 14 dpi, rising from 1.4·10 6 to 4.1·10 6 egr, aligning its values with those of EcAA4 RY. A similar pattern was evident for OpAA2 R, which displayed growth from 4.9·10 5 to 1.7·10 6 egr between 9 and 14 dpi. This growth correlated positively with Rlv3841 RB colonisation (Pearson r = 0.91, R 2 = 0.83, p value < 10 −4). Despite early colonisation events, the Betaproteobacteria AoDQS-4 Y and AxAT1 YB were not consistently detected within the OxCom6 assembly on pea roots. At 2 dpi, AoDQS-4 Y achieved a peak colonisation of 1.4·10 6 egr. However, its counts swiftly decreased to 3.5·10 5 egr by 3 dpi, coinciding with the increase of PfSBW25 B. This phenomenon is supported by a significant negative correlation between the two strains (Pearson r = -0.62, R 2 = 0.38, p value = 0.03), indicating a potential displacement of AoDQS-4 Y by PfSBW25 B. AxAT1 YB attained a maximum value of 1·10 6 egr at 3 dpi, followed by a fluctuating pattern until 14 dpi, with counts ranging between 10 5 and 10 6 egr. Finally, OxCom6 assembly dynamics were tracked on barley roots over 14 days (Fig. C). EcAA4 RY emerged as the primary coloniser from 2 dpi onward, achieving a plateau of 2–3·10 7 egr by 5 dpi. The colonisation counts of PfSBW25 B at 1 dpi (1.8·10 6 ± 2.1·10 6 egr) did not significantly differ from those observed for EcAA4 RY (4.1·10 6 ± 4.4·10 6 egr), as indicated by a paired t test (p value = 0.08). However, PfSBW25 B displayed a noteworthy decrease at 2 dpi (3.6·10 5 ± 8.1·10 4 egr), which differed significantly from the count at 1 dpi (t test p value = 0.005). Subsequently, the count of PfSBW25 B rebounded to 5–6·10 6 egr by 11 dpi, when it reached a plateau. Despite the early competitive events, a robust positive correlation exists between EcAA4 RY and PfSBW25 B (Pearson r = 0.81, R 2 = 0.66, p value = 0.0004). OpAA2 R was initially detected at 3 dpi and maintained a consistent count of between 5·10 5 and 10 6 egr up to 14 dpi. Similarly, AxAT1 YB exhibited steady colonisation on barley roots, ranging from 2 to 5·10 5 egr. Rlv3841 RB was only detected at 11 and 13 dpi, in one and two plants respectively, indicating that its colonisation on barley roots lacked stability in the presence of other OxCom6 members. Likewise, AoDQS-4 Y was detected during the initial stages of colonisation (1–3 dpi) within the 5·10 5 to 10 6 egr range. Subsequently, it was detected at 9 and 13 dpi with roughly the same counts as before. The colonisation of AoDQS-4 Y at 3 dpi and 9 dpi was quantified in only one plant, and at 13 dpi in two plants. AoDQS-4 Y colonisation of barley roots thus appeared stable during the initial colonisation events (1–3 dpi), but the strain was subsequently outcompeted by other OxCom6 members.
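For readers who want to reproduce the style of correlation analysis reported above (e.g. between Rlv3841 RB and OpAA2 R counts across time points), the following Python sketch shows the calculation with SciPy. The values are placeholders rather than the measured egr data, and the published statistics were computed in Prism.

```python
from scipy import stats

# Placeholder time series (hypothetical egr values across dpi); not the real data.
rlv3841_rb = [1.4e6, 1.9e6, 2.5e6, 3.2e6, 4.1e6]
opaa2_r    = [4.9e5, 7.0e5, 9.5e5, 1.3e6, 1.7e6]

# Pearson correlation between the two strains' colonisation counts
r, p = stats.pearsonr(rlv3841_rb, opaa2_r)
print(f"Pearson r = {r:.2f}, R^2 = {r**2:.2f}, p = {p:.4f}")
```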
Genomic integration of fluorescent markers is crucial for gene stability when studying bacteria in complex environments, due to the absence of plasmid-associated antibiotic selection . However, fluorescent protein expression must be tuned to ensure sufficient levels of protein required for detection by microscopy and flow cytometry, whilst also avoiding toxicity due to overexpression. To overcome this challenge, we generated the pTn7-SCOUT (plasmid Tn7 Suicidal low COpy for Universal Transfer) as a family of mini-Tn 7 delivery plasmids that are compatible with BEVA modular Golden Gate cloning, and which only replicate in strains containing the pir genes . The pTn7-SCOUT plasmid family facilitates the chromosomal integration of multiple expression cassettes in a diverse group of Proteobacteria. This can be applied, as shown in this work, for tracking bacterial community through the quantification of fluorescent protein. To develop the master pTn7-SCOUT10 (Fig. ), we used the pUC18R6K-mini-Tn 7 T-Km developed by Choi et al. as a scaffold. First, BsaI and Esp3I RS present in the pUC18R6K-mini-Tn 7 T-Km plasmid were mutated since BsaI and Esp3I sites are used for level 1 and antibiotic marker cloning, respectively. Secondly, the Km R located in the mini-Tn 7 between the FRT sites was replaced with an Esp3I cloning site to allow for addition of different selection markers. Lastly, the MCS located in the mini-Tn 7 was substituted with a level 1 Golden Gate cloning site ( lacZ [12pt]{minimal}
$$$$ α ) for blue to white selection, which facilitates the assembly of one expression cassette by using BsaI. To enable the assembly of multiple expression cassettes, we generated the level 2 master plasmid pTn7-SCOUT20 by replacing the pTn7-SCOUT10 cloning site with a level 2 ( tsPurple ) for purple to white selection. Finally, we independently cloned the antibiotic markers, gentamicin (Gm R ), kanamycin (Km R ), tetracycline (Tc R ) and spectinomycin (Sp R ) by Golden Gate reaction into the Esp3I cloning site, generating the pTn7-SCOUT family (Table ). The existence of a FRT site on either side of the antibiotic expression cassette on mini-Tn 7 means that, following integration, the antibiotic marker can be removed using the Flp. To facilitate this, we also developed new antibiotic versions of the pFLP2 plasmid ( flp , cI , sacB Ap R ) (Table ) to ensure compatibility with the strains used in this study. The Rhizobium leguminosarum bv. viciae 3841 (Rlv3841) containing the mini-Tn 7 -Gm-sfGFP (Rlv3841 G−Gm ) was conjugated with pOPS1468 ( flp-Ic-sacB -Tc-pBBR) to excise the Gm R from the integrated mini-Tn 7 . After sucrose selection, 100% of the strains were sensitive to Gm and the lack of a Gm R was confirmed by PCR.
In the model bacteria Escherichia coli , integration of the Tn 7 transposon occurs downstream of the glmS gene . Different strains of Alpha-, Beta- and Gammaproteobacteria were tested for mini-Tn 7 integration delivered by pTn7-SCOUT and its integration site was assessed. The region from the 3′ end of glmS gene to the upstream end of the mini-Tn 7 (Tn 7 -R) was PCR amplified and sequenced (see Table S for primers). Nucleotide alignment of the Tn 7 integration site for these strains revealed that, as previously observed in E . coli K12 and Pseudomonas aeruginosa PAO1 , Tn 7 integration occurs 25 bp from the glmS stop codon (Fig. ). However, in P . protegens Pf-5 and Achromobacter xylosoxidans AT1, integration occurs 24 bp downstream of glmS , and in Azoarcus olearius DQS-4 and Enterobacter cloacae AA4 at 26 bp. Whilst 90% of the time the Tn 7 transposon integrates 25 bp downstream glmS in E . coli K12, it has been shown to integrate at a lower frequency, at either 24 bp or 26 bp downstream . Therefore, the different integration locations ( attB ) identified among the strains tested could be related to the nature of Tn 7 integration itself rather than a strain-specific effect. Upon Tn 7 integration there is a duplication of 5 bp immediately upstream to attB site . Our results show that there is no conservation in this 5 bp sequence, suggesting that Tn 7 does not require a specific recognition sequence for integration, but rather integrates at a specific distance from the glmS gene (Fig. ). Whilst we have demonstrated that Tn 7 integration occurs 25 ± 1 bp from the glmS stop codon in diverse species, we found that some bacteria such Azorhizobium caulinodans ORS571 and Sinorhizobium meliloti CL150 encode a gene in this region that appear to be lethally disrupted by mini-Tn 7 insertion. We have previously overcome this issue by introducing a Tn 7 landing pad derived from the Rlv3841 Tn 7 attB site into a neutral region of the A . caulinodans ORS571 ( Ac LP) chromosome by double homologous recombination. This landing pad provides an alternative, non-lethal site which permits integration by Tn 7 . Here, we use the same strategy to integrate the landing pad into S . meliloti CL150 chromosome at the same neutral site previously used to harbour a recombinase attB , creating strain Sm LP. We tested the specificity of integration into these sites for Ac LP and Sm LP with three independent conjugation experiments and were able to isolate mini-Tn 7 exconjugants of each strain harbouring the landing pad, but not for their corresponding wild-type strains, indicating the landing pads were being utilised for integration. Ten of each Ac LP and Sm LP colonies putatively harbouring mini-Tn 7 from each of the three conjugation experiments were screened by PCR using bridging across the left Tn 7 attB site and chromosomal landing pad, confirming integration at the desired site in at least 90% for Ac LP (9/10, 10/10 and 9/10 colonies produced bands of the correct size) and 100% for Sm LP (10/10, 10/10, and 10/10 colonies produced bands of the correct size). One amplicon generated from each independent experiment was sequenced and successfully aligned to the predicted in silico sequences to further confirm this conclusion. Clearly this landing pad strategy is robust and can be applied to most strains recalcitrant to Tn 7 insertion at the native glmS position.
Single fluorescent proteins are widely used to track bacteria in plant–microbe interaction studies, but this approach is restricted by the availability of fluorophores and the ability to detect them. Our differential fluorescent marking (DFM) strategy couples the use of three distinguishable fluorescent proteins, mCherry, sYFP2 and TagBFP (Fig. S ), with stable, site-specific chromosomal integration by mini-Tn 7 delivered from pTn7-SCOUT plasmids. DFM uses these fluorescent proteins in single and double combinations to generate six unique patterns. The three single constructs are formed by cloning mCherry (R), sYFP2 (Y) or TagBFP (B) alone, whilst the three double markers were constructed by cloning the fluorescent proteins in pairs: mCherry and sYFP2 (RY), mCherry and TagBFP (RB), and sYFP2 and TagBFP (YB). To test our DFM strategy, Rlv3841 was labelled with each DFM construct (Rlv3841 R, Rlv3841 Y, Rlv3841 B, Rlv3841 RY, Rlv3841 RB and Rlv3841 YB) (Table ) and spotted on agar, and after two days the fluorescence of each spot was detected using a fluorescence stereomicroscope, confirming differentiation among the six DFM patterns, which are not present in the unlabelled strain (Rlv3841 U) (Fig. A). We expanded our investigation by combining Rlv3841 U and each Rlv3841 DFM strain in equal proportions. The resulting mixture was visualised using a Zeiss LSM 880 Airy Scan confocal microscope, confirming differentiation at the single-cell level among the six distinct DFM patterns and the unlabelled strain (Fig. S ). Subsequently, we ran these Rlv3841 DFM strains and Rlv3841 U independently through a flow cytometer and used Cellstream® Analysis software to distinguish the six strains based on the presence or absence of the three fluorescent proteins (Fig. B). First, the Bacteria population was defined as the concentrated area based on size (FSC) and granularity (SSC), followed by the definition of the Singlets population based on FSC and the aspect-ratio of SSC (Fig. S A and B). The gating strategy then delineates three Colour populations, one for each fluorescent marker, as follows: for mCherry expression, the Red population, defined as events detected in the 561–611/31 channel above 550 FI units; for sYFP2 expression, the Yellow population, events detected in the 488–528/46 channel above 500 FI units; and for mTagBFP expression, the Blue population, events detected in the 405–456/51 channel above 450 FI units (Fig. S C). Afterwards, we assigned six Combined populations defined by the presence or absence of the Colour populations Red, Yellow and Blue: R population (exclusively Red), Y (exclusively Yellow), B (exclusively Blue), RY (exclusively Red and Yellow), RB (exclusively Red and Blue) and YB (exclusively Yellow and Blue) (Fig. S D). The graphs in Fig. B show the detection by flow cytometry of each Colour population (columns) for each Rlv3841 DFM strain (rows), which confirms the six unique DFM patterns observed by stereomicroscopy (Fig. A). Next, we calculated the accuracy of our flow cytometry gating strategy in assigning each Rlv3841 DFM strain to its corresponding Colour population, showing that more than 90% of events were assigned correctly, whereas Rlv3841 U showed less than 1.7% of Singlets events belonging to any of these Colour populations (Table ). This 1.7% of misassigned events corresponds to events detected in the Blue Colour population.
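To make the gating logic concrete, the following Python sketch (an illustrative addition, not the CellStream® analysis itself) assigns toy events to Combined populations using the same channel thresholds described above; the column names and fluorescence values are assumptions, not exported cytometer data.

```python
import pandas as pd

# Toy event table standing in for cytometer output (FI units); values are invented.
events = pd.DataFrame({
    "ch_561_611": [900, 30, 20, 870, 700, 10],   # mCherry channel
    "ch_488_528": [40, 820, 25, 760, 15, 640],   # sYFP2 channel
    "ch_405_456": [35, 20, 950, 18, 880, 910],   # mTagBFP channel
})

red = events["ch_561_611"] > 550
yellow = events["ch_488_528"] > 500
blue = events["ch_405_456"] > 450

def combined_population(r: bool, y: bool, b: bool) -> str:
    """Build the Combined population label (R, Y, B, RY, RB, YB) from the three gates."""
    label = "".join(code for code, positive in zip("RYB", (r, y, b)) if positive)
    return label or "unlabelled"

events["population"] = [combined_population(r, y, b) for r, y, b in zip(red, yellow, blue)]
print(events["population"].value_counts())
```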
The accuracy of our flow cytometry gating strategy for detecting each DFM pattern was assessed by calculating the percentage of each Combined population (R, Y, B, RY, RB and YB) for each Rlv3841 DFM strain (Rlv3841 R, Rlv3841 Y, Rlv3841 B, Rlv3841 RY, Rlv3841 RB and Rlv3841 YB). The results showed an accuracy of more than 95% in assigning the correct Combined population to the corresponding DFM strain, with almost complete accuracy for Rlv3841 B (Table ). In this case, 99.9% of the events detected when running Rlv3841 B through the flow cytometer by itself were assigned to the corresponding B Combined population (Table ). Next, we evaluated the precision of our gating strategy in discriminating each Rlv3841 DFM strain when present in a mixed sample containing an equal number of each strain. The number of events for each Combined population was calculated, revealing that one-sixth of the total number of events was assigned to each Rlv3841 DFM version (Table ). To assess whether the presence of any DFM combination affected the growth of Rlv3841, the mean generation time (MGT) on minimal medium was calculated and compared with that of Rlv3841 U. No differences were observed for any of the Rlv3841 DFM strains, either for each antibiotic version carrying an sfGFP expression cassette or for the different colour combinations (Table ). This is consistent with previous studies showing that a fluorescent protein has no effect on fitness when integrated in single copy using mini-Tn 7. To validate the use of DFM combined with flow cytometry to assess bacterial colonisation of plant roots, we inoculated Rlv3841 R onto pea and quantified colonisation at 7 dpi by colony counts and flow cytometry. The number of Rlv3841 R counted by flow cytometry was 6 × 10^5 ± 4 × 10^5 events·g root^−1 (egr) and by colony count 1.1 × 10^6 ± 8.6 × 10^5 CFU·g root^−1, showing no significant difference (p value = 0.4375, Wilcoxon test) and demonstrating that flow cytometry gives numbers comparable to CFU, as shown for Herbaspirillum colonising rice roots. Subsequently, we tested the capacity of each Rlv3841 DFM strain to grow on pea roots in single inoculation and in competition with Rlv3841 U. No significant differences were observed, confirming that DFM does not affect the competitive colonisation ability of the strain (Table ). Finally, we examined the capacity to differentiate each Rlv3841 DFM strain when inoculated in equal amounts on pea roots. At 7 dpi, no significant differences were observed among the Rlv3841 DFM strains (Table ). These results confirm that DFM combined with flow cytometry can be used to simultaneously differentiate and quantify up to six bacterial strains from both liquid culture and plant samples with no deleterious effects on bacterial fitness. Since one member of OxCom6 is capable of nitrogen fixation, we tested whether the presence of mini-Tn 7 affects the capacity of A. olearius DQS-4 to fix nitrogen on barley roots. The nitrogenase activity of the A. olearius DQS-4 wild-type strain was 208.1 ± 44.6 nmol ethylene·plant^−1·h^−1, and that of A. olearius DQS-4 carrying the integrated mini-Tn 7 was 176.6 ± 24 nmol ethylene·plant^−1·h^−1; a t test showed no significant difference between the strains (p value = 0.25).
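The comparison between flow cytometry events and colony counts can be sketched as below; this is an illustrative example with invented paired per-plant counts, using the same Wilcoxon signed-rank test, and is not the original dataset or analysis code.

```python
from scipy.stats import wilcoxon

# Paired per-plant counts for the same roots; numbers are invented for illustration only.
flow_events_per_g = [4.2e5, 1.5e6, 1.6e5, 2.3e6, 5.0e5, 1.2e6]
cfu_per_g         = [9.0e5, 1.1e6, 4.0e5, 2.1e6, 8.0e5, 9.5e5]

statistic, p_value = wilcoxon(flow_events_per_g, cfu_per_g)
# A large p value indicates no detectable difference between the two quantification methods.
print(f"Wilcoxon signed-rank test: p = {p_value:.3f}")
```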
To test the accuracy of DFM in discriminating, tracking and quantifying individual members of a bacterial community, a model SynCom (OxCom6) was assembled from well-characterised root-colonising strains, all of which are amenable to genetic modification. These belong to the Alphaproteobacteria (Ochrobactrum pituitosum AA2, R. leguminosarum bv. viciae 3841), Betaproteobacteria (A. xylosoxidans AT1, A. olearius DQS-4) and Gammaproteobacteria (E. cloacae AA4 and P. fluorescens SBW25). Each member of the OxCom6 community was labelled with a specific DFM combination: O. pituitosum AA2 was labelled with mCherry (OpAA2 R), R. leguminosarum bv. viciae 3841 with mCherry and mTagBFP (Rlv3841 RB), A. olearius DQS-4 with sYFP2 (AoDQS-4 Y), A. xylosoxidans AT1 with sYFP2 and mTagBFP (AxAT1 YB), E. cloacae AA4 with mCherry and sYFP2 (EcAA4 RY) and P. fluorescens SBW25 with mTagBFP (PfSBW25 B) (Table , and Table S for details on the strains used). Labelling each OxCom6 strain with its DFM pattern had no effect on fitness (Table S ) or competitive colonisation (Table S ). Similar to the observations for Rlv3841, no differences were observed when comparing CFU·mL^−1 and events·mL^−1 for each DFM-labelled OxCom6 strain (Table S ). Subsequently, we monitored the assembly of OxCom6 in nutrient-rich media over a span of 96 h and on pea and barley roots for a duration of 14 days (Fig. ). The results from the OxCom6 assembly in nutrient-rich media (Fig. A) revealed that EcAA4 RY exhibited robust and sustained growth, reaching a maximum count of 1.5 × 10^9 events·mL^−1 (emL) within 24 h. In contrast, the other members of OxCom6 reached a growth plateau at 61 h. OpAA2 R and PfSBW25 B attained peak counts of 2 × 10^8 and 1.7 × 10^8 emL, respectively. Similarly, AxAT1 YB and Rlv3841 RB achieved comparable plateau levels, recording 8.4 × 10^6 and 8.8 × 10^6 emL, respectively. Meanwhile, AoDQS-4 Y reached a maximum of 4.7 × 10^6 emL. Notably, EcAA4 RY demonstrated the fastest growth rate among the strains, establishing itself as the most prolific, and therefore most abundant, member when OxCom6 assembled in nutrient-rich media. Subsequently, the assembly dynamics of OxCom6 were tracked on pea roots over 14 days (Fig. B). At 1 dpi, EcAA4 RY emerged as the predominant coloniser, accounting for 10^6 egr. However, by 2 dpi, PfSBW25 B displayed higher counts than EcAA4 RY, recording 3.7 × 10^6 ± 3.2 × 10^5 and 1.4 × 10^6 ± 7.3 × 10^5 egr, respectively. This disparity became significant from 3 dpi, with colonisation counts of 4.2 × 10^6 ± 2.4 × 10^6 egr for EcAA4 RY and 1.7 × 10^7 ± 7.9 × 10^6 egr for PfSBW25 B (paired t test p value = 0.001). Both strains achieved and sustained a plateau from 3 dpi onward, with counts of approximately 1.7–4.3 × 10^7 egr for PfSBW25 B and 1.8–4.2 × 10^6 egr for EcAA4 RY. Starting at 10 dpi, a consistent rise was observed in the counts of Rlv3841 RB and OpAA2 R. Rlv3841 RB exhibited an increase from 6 to 14 dpi, rising from 1.4 × 10^6 to 4.1 × 10^6 egr, aligning its values with those of EcAA4 RY. A similar pattern was evident for OpAA2 R, which displayed growth from 4.9 × 10^5 to 1.7 × 10^6 egr between 9 and 14 dpi. This growth correlated positively with Rlv3841 RB colonisation (Pearson r = 0.91, R^2 = 0.83, p value < 10^−4). Despite early colonisation events, the Betaproteobacteria AoDQS-4 Y and AxAT1 YB were not consistently detected within the OxCom6 assembly on pea roots. At 2 dpi, AoDQS-4 Y achieved a peak colonisation of 1.4 × 10^6 egr.
However, its counts swiftly decreased to 3.5 × 10^5 egr by 3 dpi, coinciding with the increase of PfSBW25 B. This phenomenon is supported by a significant negative correlation between the two strains (Pearson r = −0.62, R^2 = 0.38, p value = 0.03), indicating a potential displacement of AoDQS-4 Y by PfSBW25 B. AxAT1 YB attained a maximum of 1 × 10^6 egr at 3 dpi, followed by a fluctuating pattern until 14 dpi, with counts ranging between 10^5 and 10^6 egr. Finally, OxCom6 assembly dynamics were tracked on barley roots over 14 days (Fig. C). EcAA4 RY emerged as the primary coloniser from 2 dpi onward, achieving a plateau of 2–3 × 10^7 egr by 5 dpi. The colonisation counts of PfSBW25 B at 1 dpi (1.8 × 10^6 ± 2.1 × 10^6 egr) did not significantly differ from those observed for EcAA4 RY (4.1 × 10^6 ± 4.4 × 10^6 egr), as indicated by a paired t test (p value = 0.08). However, PfSBW25 B displayed a marked decrease at 2 dpi (3.6 × 10^5 ± 8.1 × 10^4 egr), which differed significantly from the count at 1 dpi (t test p value = 0.005). Subsequently, the count of PfSBW25 B rebounded to 5–6 × 10^6 egr by 11 dpi, when it reached a plateau. Despite these early competitive events, a robust positive correlation exists between EcAA4 RY and PfSBW25 B (Pearson r = 0.81, R^2 = 0.66, p value = 0.0004). OpAA2 R was initially detected at 3 dpi and maintained a consistent count between 5 × 10^5 and 10^6 egr up to 14 dpi. Similarly, AxAT1 YB exhibited steady colonisation on barley roots, ranging from 2 to 5 × 10^5 egr. Rlv3841 RB was only detected at 11 and 13 dpi, in one and two plants respectively, indicating that its colonisation of barley roots lacked stability in the presence of the other OxCom6 members. Likewise, AoDQS-4 Y was detected during the initial stages of colonisation (1–3 dpi) within the 5 × 10^5 to 10^6 egr range. Subsequently, it was detected at 9 and 13 dpi with roughly the same counts as before. The colonisation of AoDQS-4 Y at 3 dpi and 9 dpi was quantified in only one plant, and at 13 dpi in two plants. AoDQS-4 Y colonisation of barley roots appeared stable during the initial colonisation events (1–3 dpi), but the strain was subsequently outcompeted by other OxCom6 members.
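The strain-to-strain correlations reported above are Pearson correlations across shared sampling time points; the following sketch shows the calculation on invented counts for two strains and is not the original dataset or analysis code.

```python
from scipy.stats import pearsonr

# Invented per-timepoint colonisation counts (events per g root) for two strains.
aodqs4_counts  = [1.4e6, 3.5e5, 2.0e5, 1.5e5, 1.0e5, 8.0e4]
pfsbw25_counts = [2.0e6, 1.2e7, 1.7e7, 2.5e7, 3.0e7, 3.4e7]

r, p_value = pearsonr(aodqs4_counts, pfsbw25_counts)
# A negative r across time points is consistent with one strain displacing the other.
print(f"Pearson r = {r:.2f}, R^2 = {r * r:.2f}, p = {p_value:.3f}")
```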
Mini-Tn 7 is an excellent delivery system to use when working with a wide range of bacterial species in a non-selective environment since it is 100% stable for 100 generations in the absence of antibiotic selection. Mini-Tn 7 is broad-range, as demonstrated by successful delivery into multiple strains within the Proteobacteria. Moreover, mini-Tn 7 is highly efficient and integrates in single copy into bacterial chromosomes, site- and orientation-specifically at attB Tn 7, located downstream of the 3′ end of the highly conserved glmS gene. In contrast to plasmids, mini-Tn 7 is replicated within the chromosome, therefore it does not have a fitness cost due to copy number or replication mechanism, and it is compatible with any other cloning system. Here, we developed pTn7-SCOUT, a new family of mini-Tn 7 plasmids compatible with the Golden Gate modular cloning system BEVA, which allowed us to rapidly tune the expression of the different fluorescent markers used in the DFM strategy. The pTn7-SCOUT family uses the suicidal R6K origin of replication, which only replicates in the presence of pir genes supplied in trans. Moreover, in pir + E. coli strains the R6K copy number is less than 15, which reduces the toxic effect of highly expressed cassettes. We replaced the MCS with either a level 1 or level 2 compatible Golden Gate cloning site, to allow the addition of single or multiple expression cassettes respectively. These Golden Gate cloning sites carry blue/purple (lacZα/tsPurple) to white markers to facilitate the identification of positive transformants. The presence of a Golden Gate cloning site enables the use of the vast diversity of compatible Golden Gate modules available to construct the desired fluorescent cassette. Nevertheless, the pTn7-SCOUT family is not restricted to Golden Gate assembly, as the level 1 and level 2 plasmids can be digested with BsaI and BpiI respectively to become entry plasmids for classic cloning such as digestion-ligation, or for DNA fragment assembly methods like Gibson or HiFi (NEB). Moreover, the lacZα within the level 1 cloning site contains a polylinker for traditional cloning.
The pTn7-SCOUT plasmid family has an Esp3I site within the mini-Tn 7 to clone any selection marker, such as antibiotic resistance genes. We successfully cloned Gm R, Tc R and Km R resistance markers using the BEVA modules. However, as shown with the Sp versions, any other selection marker can be cloned by simply PCR-amplifying it with compatible overhangs, followed by cloning into pTn7-SCOUT digested with Esp3I. The level 2 Golden Gate and the antibiotic cassette cloning sites increase the modularity of the already available mini-Tn 7 delivery plasmids. We expanded the pTn7-SCOUT family with new antibiotic versions of Flippase-containing plasmids to enable excision of the antibiotic resistance cassette, compatible with the strains used in this study, since only ampicillin (Ap R) and Tc R versions were available. Characterization of the attB site has enabled us to predict the success of mini-Tn 7 integration if the host genome sequence is known. In some strains, mini-Tn 7 integration would disrupt a gene; however, we have overcome this issue by integrating a new landing pad, providing a new attB site where mini-Tn 7 is able to integrate (with an efficiency over 90% in the strains tested). This tool removes a bottleneck in mini-Tn 7 use.
The DFM tool combines single chromosomal integration with multi-fluorescence labelling to discriminate up to six different strains in a bacterial community when growing in nutrient-rich media or colonising plant roots (Fig. ). Our flow cytometry protocol is able to discriminate each DFM-labelled strain with more than 95% efficiency (Fig. , Fig. S , Table , Table , Table ), which is as efficient as the tool developed by Whitaker et al., where GFP and RFP were combined with different RBS strengths to differentiate six Bacteroides strains with a 6% error. The main source of misassignment detected was with the Blue Colour population (Table , Fig. B). This can be partially related to autofluorescence of aromatic amino acids, thiamine and riboflavin, detected in the 405–456/51 channel. However, this blue autofluorescence represents less than 2% of the events in the Rlv3841 U strain (Table , Fig. B). In addition, plant roots can show blue autofluorescence, mainly related to lignin and suberin compounds of the cell wall, as shown in the non-inoculated pea roots (Table S ). To overcome this issue, we quantified the background on non-inoculated pea roots for each Combined population and subtracted this from the colonisation values. High expression of fluorescent proteins can affect growth, decrease fitness, and generate toxicity due to protein aggregation and solubilisation. The fluorescent proteins chosen for DFM (mCherry, sYFP2 and mTagBFP) are engineered monomers with increased brightness, protein folding, extinction coefficient and maturation, which reduce deleterious effects compared to their predecessors. Moreover, DFM is assembled in low-copy-number plasmids and then integrated as a single copy into the bacterial chromosome, which reduces overall expression levels of the fluorescent proteins, and thereby any related toxicity. Furthermore, our results showed no deleterious effect of any DFM combination during growth in liquid culture or colonisation of plants (Table , Table , Table S , Table S ). We successfully applied DFM to the OxCom6, a model SynCom of Proteobacteria root colonisers.
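The background-subtraction step described above can be sketched as follows; the per-population counts are invented and this code is an illustrative addition rather than the analysis pipeline used in the study.

```python
import pandas as pd

# Invented events-per-g-root counts for inoculated roots and non-inoculated (background) roots.
inoculated = pd.Series({"R": 4.2e5, "Y": 1.1e5, "B": 9.5e5, "RY": 2.3e5, "RB": 6.0e4, "YB": 8.0e4})
background = pd.Series({"R": 1.0e3, "Y": 2.0e3, "B": 4.5e4, "RY": 5.0e2, "RB": 8.0e2, "YB": 1.2e3})

# Subtract the root autofluorescence background per Combined population (mostly affects B).
corrected = (inoculated - background).clip(lower=0)
print(corrected)
```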
Assembly of OxCom6 differed between nutrient-rich media, pea roots and barley roots (Fig. ), indicating that the findings in planta can be associated with rhizosphere adaptation, as has been shown for the plant microbiome. The most marked difference was between OxCom6 assembly on pea and barley roots, where each had a distinct dominant strain, PfSBW25 B and EcAA4 RY respectively, and their colonisation was determined in the early stages of root occupancy (1–3 dpi) (Fig. B and C). P. fluorescens SBW25, a well-known root coloniser isolated from sugar beet, is recognised to enhance plant growth through a combination of factors such as competing with other microorganisms, producing antimicrobial compounds and stimulating systemic resistance. P. fluorescens SBW25 has the capability to generate furanomycin, which displays a potent inhibitory effect on the growth of Pseudomonas, Bacillus, Erwinia and Dickeya strains, as observed in agar diffusion assays. On the other hand, E. cloacae AA4 is part of a 7-member SynCom isolated from maize roots, and the absence of E. cloacae AA4 results in the collapse of root colonisation by that SynCom. Whilst E. cloacae AA4 exhibits antifungal and nematocidal properties, it has not been shown to have any antibacterial activity. The intrinsic antibiotic capabilities of the two OxCom6 Gammaproteobacteria alone do not explain the distinctive OxCom6 assembly phenotype. This suggests that there may be rhizosphere adaptation to pea in the case of PfSBW25 B and to barley in the case of EcAA4 RY, likely influenced by root exudates. The pea and barley root exudate profiles have not been extensively characterised to date, but some studies have provided partial descriptions of their components. In the case of barley, a study by Calvo et al. reported the presence of sugars such as sucrose, fructose and glucose at concentrations between 1 and 1.5 mg·g root dry weight^−1 at 71 days. On the other hand, the use of metabolite reporters on pea roots showed that at 4 dpi the greatest proportion of metabolites detected comprised sugars (xylose, fructose and myo-inositol), dicarboxylic acids (malonate and tartrate) and hesperetin, whereas other sugars such as sucrose were barely detected at this time point. This suggests that the different nature of pea and barley rhizosphere secretions, and therefore the different metabolic profiles that OxCom6 members can catabolise during the early stages of establishment, may be crucial in colonisation. On pea roots Rlv3841 RB can achieve levels of colonisation similar to EcAA4 RY, with both reaching counts of 4.1 × 10^6 egr at 14 dpi (Fig. B). Rlv3841 is a root symbiont of pea plants known for its unique affinity for colonising pea roots and inducing the formation of nitrogen-fixing nodules. Therefore, colonisation by Rlv3841 RB may be associated with specific niches, such as infection threads and nodules, as evident from the presence of prominent nodules formed by Rlv3841 RB at 13 and 14 dpi, as shown in Fig. S . Rlv3841 RB root colonisation numbers on pea in OxCom6 are lower than those in single culture at 7 dpi, 1.7 × 10^6 ± 1.4 × 10^6 and 4.7 × 10^6 ± 1.5 × 10^6 egr respectively (t test p value = 0.006) (Fig. B and Table ). This suggests the potential use of OxCom6 as a controlled environment to investigate competitive colonisation by legume endosymbionts, which is critical for the competitiveness of inoculants in the field.
On the other hand, Rlv3841 RB was not detected in the barley rhizosphere (Fig. C), which reveals the adaptation of this pea endosymbiont to its host rhizosphere. Although A. olearius DQS-4 is capable of fixing nitrogen under free-living conditions and on barley roots, as well as promoting plant growth in rice and Setaria viridis, it was not able to effectively colonise pea and barley roots in the presence of the other members of OxCom6. Whilst it can colonise the root intercellular spaces of rice and S. viridis, it is not a strong competitor for pea and barley root colonisation, perhaps because it was isolated from oil-contaminated soil. O. pituitosum AA2, like E. cloacae AA4, is one of the seven members of the maize SynCom and a significant contributor to that community at 14 dpi. OpAA2 R has a strong positive correlation with the colonisation/root infection of Rlv3841 RB on pea roots. This could be partially facilitated by Nod factor produced by rhizobia, as legume mutants with impaired Nod factor perception have been shown to have a less abundant and altered microbiome. However, OpAA2 R showed similar root colonisation counts on pea and barley, and a positive correlation was observed between both plants (Pearson r = 0.62, R^2 = 0.39, p value = 0.03), which suggests good adaptation to both plant rhizospheres; only this strain out of the six showed a significant correlation between both plants. Therefore, the correlation with Rlv3841 RB on pea cannot be attributed to Rlv3841 host specificity. A. xylosoxidans AT1 was isolated from the rhizosphere of Medicago truncatula and it promotes growth of A. thaliana, M. truncatula and Brachypodium distachyon. The fluctuating colonisation of AxAT1 YB on pea roots, as shown in Fig. B, may be influenced by stochastic availability of specific resources for bacteria in the pea rhizosphere, which can result in oscillations in bacterial growth. However, this is not the case on barley roots, where AxAT1 YB colonises in a steadier way, suggesting better adaptation to this rhizosphere. A. xylosoxidans AT1 was isolated from M. truncatula by Tkacz et al.; however, OTUs of Achromobacter spp. were among the most abundant in the three rhizospheres studied: M. truncatula, A. thaliana and B. distachyon. This suggests that the isolation from M. truncatula may be somewhat stochastic and does not necessarily imply that A. xylosoxidans AT1 is better adapted to this plant. These results suggest that the distinct nature of the rhizosphere resources in pea and barley can result in different metabolic profiles encountered by OxCom6 members during colonisation. The availability of these resources in both plants would be just one aspect of the equation. Similarly, the catabolic capabilities of OxCom6 members in these rhizospheres could play a significant role in determining the assembly profile on each plant root, based on their preference for catabolic sources. However, catabolic capability alone may not be the sole determinant of this phenotype; competitive exclusion could also play a crucial role. The speed at which bacteria utilise these resources could define their adaptation and, consequently, their abundance. Factors such as chemotaxis and motility are pivotal in these processes, since once a bacterium can detect a resource and effectively access and utilise it, it gains an advantage over others, leading to a more rapid increase in numbers.
The combination of DFM with flow cytometry allowed us to perform absolute quantification of bacterial root colonisation quickly and easily. This is crucial when assessing root colonisation dynamics, as shown in Fig. , since relying solely on relative abundance can lead to inaccurate comparisons between samples (Fig. S ) . Whilst DFM was used here for absolute quantification of bacterial root colonisation, it can also be applied to other bacterial communities in any environment. Whilst in this study we limited the SynCom to six members to correspond to the available marker combinations, marked strains can of course be combined into larger communities. Furthermore, by varying the marked strains, large assemblies can be investigated. Techniques using DFM illustrated here provide the means for rapid assessment of microbial communities in diverse plant, animal, and environmental settings.
Additional file 1: Fig S1. Flow cytometry gating strategy. Employing the CellStream® Analysis 1.3.384 software, the gating strategy was implemented to delineate the Colour and Combined populations. The initial step involved defining the Bacteria population by selecting the concentrated events area when plotting size (FSC – 456/51) against granularity (SSC – 773/56). Subsequently, the Bacteria population was gated based on FSC (threshold > 0) and the aspect-ratio of SSC (threshold > 0.4), establishing the Singlets population. The Singlets population was then further refined based on fluorescence emission to depict the different Colour populations: Red, Yellow and Blue, corresponding to the fluorescent emission of mCherry, sYFP2 and mTagBFP, respectively. For mCherry, fluorescent emission was detected at 611/31, with a threshold above 550 FI units to define the Red population. For sYFP2, emission was detected at 528/46, and events above 500 FI units were designated as the Yellow population. Emission for TagBFP was acquired at 457/51, and events exhibiting fluorescence above 450 FI units were categorised as the Blue population. Combining one or two of the different Colour populations led to the definition of six distinct Combined populations: R (Red), Y (Yellow), B (Blue), RY (Red and Yellow), RB (Red and Blue) and YB (Yellow and Blue). Additional file 2: Fig S2. Fluorescent protein spectra. The excitation (EX) (dotted) and emission (EM) (dashed) spectra are shown for three fluorescent proteins: mTagBFP (blue), sYFP2 (yellow) and mCherry (red) (data sourced from fpbase.org). Vertical lines indicate the laser wavelengths (nm), whilst the light bars represent the filters used in the Amnis® Cellstream® flow cytometer to detect mTagBFP (blue, 405 nm – 457/51), sYFP2 (yellow, 488 – 528/46) and mCherry (red, 561 – 611/31). Additional file 3: Fig S3. Confocal microscopy images of Rhizobium leguminosarum 3841 unlabelled and labelled with different DFM combinations. A) Bright-field channel. B) Bright-field, red, yellow and blue channels. C) Red, yellow and blue channels. D) Red channel. E) Yellow channel. F) Blue channel. WT: R. leguminosarum 3841 (Rlv3841) not labelled. R: Rlv3841 labelled with mCherry. Y: Rlv3841 labelled with sYFP2. B: Rlv3841 labelled with mTagBFP. RY: Rlv3841 labelled with mCherry and sYFP2. RB: Rlv3841 labelled with mCherry and mTagBFP. YB: Rlv3841 labelled with sYFP2 and mTagBFP. Additional file 4: Fig S4. Stereomicroscope images of Rhizobium leguminosarum bv. viciae 3841 RB within nodules on pea roots inoculated with OxCom6 at 13 and 14 dpi. The first column shows bright-field images. The second column uses the 560/40–630/74 channel to observe mCherry expression. The third column uses the 405/20–460/40 channel to visualise TagBFP expression. Additional file 5: Fig S5. Absolute and relative values of community assembly of Enterobacter cloacae AA4. This figure represents the absolute (blue) and relative (orange) values of E. cloacae AA4 labelled with mCherry and sYFP2 (EcAA4 RY) colonising pea roots (A), barley roots (B) and growing on rich media (C). egr (events·g root^−1). emL (events·mL^−1). The absolute and relative values of EcAA4 RY follow different tendencies on pea roots and on rich media: in both, the relative values appear to decrease, whereas the absolute values show that the strain remains steady. Additional file 6: Table S1. Primers used in this study Additional file 7: Table S2.
Plasmids used in this study Additional file 8: Table S3. pTn7-SCOUT plasmids developed in this study Additional file 9: Table S4. Strains used in this study Additional file 10: Table S5. FlowRepository codes for flow cytometry data used in this study Additional file 11: Table S6. Events per gram of root of non-inoculated pea and barley roots for each Combined population. Additional file 12: Table S7. Mean generation time of each OxCom6 strain, unlabelled and labelled with its DFM pattern Additional file 13: Table S8. Colonisation of pea roots (egr) by each OxCom6 strain when inoculated alone (single colonisation) or in competition with the unlabelled strain (competitive colonisation) Additional file 14: Table S9. Comparison between colony forming units and flow cytometry data for each OxCom6 strain. Additional file 15: Supplementary Methods. Description of the assembly of the Golden Gate plasmids used in this study.
Quantification of US Food and Drug Administration Premarket Approval Statements for High-Risk Medical Devices With Pediatric Age Indications | ac701e6d-d61f-45dc-9024-1ce52b998f86 | 8220494 | Pediatrics[mh] | Medical devices in the US are regulated by the US Center for Devices and Radiological Health of the US Food and Drug Administration (FDA) for quality, safety, and effectiveness. The FDA categorizes medical devices into 3 classes (I, II, and III) in order of risk. Class III designation is reserved for devices that “support or sustain human life, are of substantial importance in preventing impairment of human health, or that present a potential, unreasonable risk of illness or injury.” These high-risk devices often require premarket approval (PMA), which has stringent testing requirements to demonstrate safety and effectiveness. , In contrast, a 510(k) premarket submission is used for medical devices that are “substantially equivalent” to previously approved devices. These devices tend to be class II and have a precedent for safety and effectiveness. Class I devices are usually exempt from review. Substantial barriers exist in developing medical devices for the pediatric population. These include a lack of pediatric device trials infrastructure, difficulty in enrolling pediatric participants, and high costs. An FDA-led national survey of government-associated clinicians found that 74% of device needs pertain to the pediatric population. The report further found that clinicians with a pediatric focus were more likely than those without one to modify or repurpose a therapeutic device or use the device for off-label treatment in patients. Off-label use has emerged as a by-product of the relative scarcity of specifically approved pediatric medical devices and the device needs of the pediatric population. The extent of off-label use in pediatric populations and the effectiveness and safety of off-label use are not well characterized. The integration of medical devices in clinical settings requires attention to their indications and use. Off-label use in pediatric clinical care is relatively common because few devices are specifically indicated for use in pediatric clinical care populations across clinical disciplines. Off-label device use is problematic, especially in the context of high-risk class III devices. For example, biliary stents and embolization coils are regularly used off-label in pediatric interventional cardiology and can lead to severe clinical complications, such as intravascular hemolysis, embolization, and thrombosis. The pediatric population has unique medical needs with relevant differences in growth and development that preclude the direct application of adult devices. A systematic quantitative analysis of age-based device availability may help highlight areas of need for innovators and policy makers, characterize rates and off-label use in pediatric patients, and facilitate the implementation and assessment of targeted policies and initiatives to address disparities. However, this type of analysis has not been conducted to date because no database has comprehensively compiled the age-based indications for medical devices. This information is dispersed across regulatory documents, such as the FDA reports to Congress and medical-device labels in free-text descriptions of indications for use, making systematic analysis a challenge. The aim of this study was to quantify and characterize high-risk medical devices with pediatric age indications. 
We hypothesized that fewer medical devices would be available for the pediatric population than for adults. We compiled a database of age indication information from PMA statements and used it to characterize the number and types of high-risk devices that are intended for use in pediatric patients.
Data Set Retrieval and Preprocessing We retrieved PMA statements that included the words indicated or intended for medical devices listed in the FDA PMA database from inception to February 2020 using the OpenFDA REST API. We also obtained metadata associated with PMA supplements, including product codes, regulation numbers, advisory panels, and approval dates. We then preprocessed the text of the approval order statements for analysis by natural language processing systems: removing characters not found in the American Standard Code for Information Interchange, such as trademark symbols; converting text written in all capital letters to sentence case; and correcting segmentation errors for enumerated lists. We released the corpus of approval order statements on PubAnnotation. This study was exempt from institutional review board approval according to the general policies of the Icahn School of Medicine at Mount Sinai institutional review board regarding research not involving human participants. After initial data cleaning of the PMA statements, we searched for key words pertaining to age, including age , pediatric , adolescent , neonate , infant , child , children , younger , older , and years , for a total of 394 documents . Duplicates and documents without an age indication were removed, leaving a final sample of 297 viable statements for analysis. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology ( STROBE ) reporting guideline for cross-sectional studies. Annotation Guidelines We developed guidelines to standardize the manual annotation process using an iterative approach. Starting from a simple initial system, we annotated all text spans pertaining to age-based populations, identified constituent semantic types and linguistic features of age indications, and updated our annotation guidelines until we reached a final set of guidelines. Updating the guidelines was done collaboratively among 4 researchers (S.J.L., L.C., S.R., and B.S.G.), and changes were accepted only in the case of unanimous agreement. The final set of semantic types used for annotation included age category, age range, age range start, age range start unit, age range end, and age range end unit. The annotation guidelines are available in the eMethods in the . Manual Annotation We manually annotated text spans denoting age indications according to our annotation guidelines using TextAE. To ensure the validity of our natural language processing system, 2 reviewers (S.J.L. and L.C.) independently annotated each PMA statement document. Where we observed discrepancies between reviewers, a third individual (B.S.G.) resolved the difference to create a final consensus annotation set that was used in our final analysis. Statistical Analysis After filtering the initial corpus to 297 unique documents, we downloaded and further cleaned the annotations using the Python programming language version 3 (Python Software Foundation). Annotations with the same meaning but different spellings were standardized across documents for each annotation category. For instance, the phrases “18 years of age and older” and “18 years and older” were standardized to “18 years of age and older.” Age annotations that were spelled out were converted to their numeric equivalents (ie, “eighteen” was converted to “18”). Age ranges in devices indicating multiple age ranges were collapsed into an age range that included all specified age ranges.
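For illustration, a retrieval and keyword-filtering step of this kind could be written as in the Python sketch below; this is not the study's code, and the search syntax and the ao_statement field name are assumptions based on the public openFDA device PMA endpoint.

```python
import re
import requests

AGE_TERMS = re.compile(
    r"\b(age|pediatric|adolescent|neonate|infant|child|children|younger|older|years)\b", re.I
)

def fetch_pma_records(limit: int = 100, skip: int = 0) -> list:
    """Fetch device PMA records whose approval order statement mentions 'indicated' or 'intended'."""
    response = requests.get(
        "https://api.fda.gov/device/pma.json",
        params={"search": "ao_statement:(indicated OR intended)", "limit": limit, "skip": skip},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("results", [])

records = fetch_pma_records()
age_related = [r for r in records if AGE_TERMS.search(r.get("ao_statement", ""))]
print(f"{len(age_related)} of {len(records)} statements mention an age-related key word")
```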
Under section 515A of the Federal Food, Drug, and Cosmetic Act, the FDA classifies the pediatric population as birth through 21 years of age and further subdivides this population into neonate, infant, child, and adolescent. The FDA categorizes neonates from birth through the first 28 days, infants from 29 days to 2 years of age, children from 2 years to 12 years of age, and adolescents from 12 years to 21 years. Because each subpopulation presents with its own unique challenges, we classified devices as indicated for pediatric patients or adult patients and further classified devices indicated for pediatric patients into pediatric subgroups. Some devices were classified into multiple groups because of age indications that intersected with multiple predefined FDA age groups. We then tabulated the number of devices available for patients at each age from 0 to 21 years. Using the device metadata on the advisory committee, we then stratified devices by clinical specialty to investigate which clinical specialties were represented. We calculated the time between the initial PMA statement and the PMA statement with a pediatric indication for various generic device categories. We identified the generic device category from the metadata for each identified pediatric device based on the manual annotation process. We queried the OpenFDA API for the date of the first PMA statement for that generic category. We then manually surveyed the subsequent PMA statements for a given generic device category to find the date of approval for the first mention of a pediatric indication. Using these 2 dates, we calculated the difference between the 2 dates for each pediatric generic device category.
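The subgroup assignment and per-age tabulation described above can be sketched as follows; this simplified Python example is an illustrative addition, and the handling of boundaries at exactly 28 days, 2, 12, and 21 years is a modelling choice rather than a detail taken from the study.

```python
# Age boundaries in years, following the section 515A subgroup definitions described above.
FDA_SUBGROUPS = {
    "neonate": (0.0, 28 / 365),
    "infant": (28 / 365, 2.0),
    "child": (2.0, 12.0),
    "adolescent": (12.0, 21.0),
}

def pediatric_subgroups(age_start: float, age_end: float) -> list:
    """Return every FDA pediatric subgroup that an indicated age range overlaps."""
    return [name for name, (low, high) in FDA_SUBGROUPS.items()
            if age_start < high and age_end > low]

def devices_available_at_each_age(indicated_ranges: list) -> dict:
    """Count how many indicated age ranges cover each whole-year age from 0 to 21."""
    return {age: sum(low <= age <= high for low, high in indicated_ranges) for age in range(22)}

print(pediatric_subgroups(18, 120))                                    # ['adolescent']
print(devices_available_at_each_age([(18, 120), (0, 120), (12, 21)]))  # more devices at 18 and over
```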
Of the 149 unique devices analyzed, we identified 102 devices (68%) with a pediatric indication, 10 devices (7%) with a neonate age indication, 32 devices (21%) with an infant age indication, 60 devices (40%) with a child age indication, and 94 devices (63%) with an adolescent age indication. Manual Review Process An example of our annotations in PubAnnotation is shown in the eMethods in the , identifying key words related to age according to our annotation guidelines (eFigure 1 in the ). This consensus set for the 297 documents consists of a total of 1568 annotations distributed across the 5 annotation groups agreed on in the annotation guidelines. The most common annotations were age end (349 of 1568 [22%]), age start (346 of 1568 [22%]), and age range (333 of 1568 [21%]). Fewer documents had unit (age start unit: 235 of 1568 [15%]; age end unit: 58 of 1568 [4%]) and age category (247 of 1568 [16%]) to annotate (eFigure 2 in the ). The device statements surveyed contain a wide scope of age ranges indicated for multiple age groups. The mean (SD) number of devices for the pediatric population aged 21 and younger was 47.13 (19.63) devices. However, many of these devices were indicated for patients aged 18 and over. The number of devices with an age indication from 17 years to 18 years increased from 42 to 81 . The increases in devices may owe to the fact that the most common age ranges used in the approval statements were 18 years of age and older and 21 years of age and older. Device Landscape The 297 documents reviewed accounted for 149 unique devices with PMA statements because some documents were supplements updating the information on a device model. Of these devices, 102 devices (68%) had a pediatric age indication (under 21 years), 10 devices (7%) had a neonate age indication (birth to 28 days), 32 devices (21%) had an infant age indication (29 days to 2 years), 60 devices (40%) had a child age indication (2-12 years), and 94 devices (63%) had an adolescent age indication (12-21 years). Because devices can be indicated for broad age ranges, some devices are indicated for multiple pediatric subgroups; 140 of the 149 identified devices (94%) were class III, and 9 devices (6%) were class II. Results by Clinical Specialty Fifteen different clinical specialties were represented by the device statements reviewed (eTable 1 in the ). Of these specialties, the most frequently represented were ophthalmology with 48 devices, cardiology with 22, immunology with 16, and clinical chemistry with 16, whereas radiology, anesthesiology, and physical medicine had 1 associated device each. After stratification for clinical specialty, we found that the diversity of clinical specialties with age-approved devices increased with increasing age . From ages 0 to 17 years, the mean (SD) number of specialties represented was 7.27 (1.4) of the total 15 clinical specialties. In contrast, 12 of 15 specialties were represented from ages 18 to 21 years. From age 17 to 18 years, notable increases in ophthalmology (from 1 to 10), neurology (from 2 to 6), and clinical chemistry (from 9 to 15) were seen. In addition, surgery, orthopedics, radiology, pathology, and physical medicine devices had no representation from ages 0 to 17. Adults aged 21 years and older had the greatest representation across specialties, with 14 (93%) of 15 specialties represented in devices available. 
Pediatric Subpopulation Analysis We classified devices into pediatric subgroups based on identified age indications; 40 of the 94 identified adolescent devices (43%) were not indicated for children, infant, or neonate age groups. Few devices were approved specifically for the children (27), infant (17), and neonate (10) age ranges (eTable 2 in the ). This finding is consistent with the higher number of devices available to patients aged 18 years and over. The clinical specialties most associated with devices with a pediatric indication were cardiology (22), ophthalmology (18), and clinical chemistry (16) . The most common devices seen with a pediatric indication include an excimer laser system (13), hepatitis B screening tests (11), automated external defibrillators (7), invasive glucose sensors (7), and cochlear implants (5) . A total of 100 of the pediatric devices (98%) were class III and the remaining 2 (2%) were class II. Neonate For the 10 identified devices with a neonatal age indication, all were hepatitis B screening tests that belonged to the clinical specialties of microbiology or immunology. All were class III devices. Infant For the 32 identified devices with an infant age indication, the most common clinical specialties were cardiology and otolaryngology. Most cardiovascular devices were automated external defibrillators, and most otolaryngology devices were cochlear implants. Thirty-one of these devices (97%) were class III, and 1 device (3%) was class II. Children Of the 60 devices, the most common clinical specialties were cardiology and clinical chemistry. Most of the cardiovascular devices were automated external defibrillators, and most of the clinical chemistry devices were insulin pumps and glucose sensors. A total of 58 of the devices (97%) identified were class III, and 2 devices (3%) were class II. Adolescent Of the 94 devices identified for adolescents, the most common clinical specialties were cardiology and clinical chemistry. Most of the cardiovascular devices were automated external defibrillators, and most of the clinical chemistry devices were insulin pumps and glucose sensors. Excimer laser system devices also made up the most common device accounting for increased ophthalmic devices represented among the adolescent devices. Ninety-two of the devices (98%) were class III and 2 devices (2%) were class II. Indication Time Lag Results by Device Category By examining the time between a generic device category’s initial approval and its first approval with a pediatric indication, we quantified the lag time among different device categories. A total of 18 (38%) generic device categories were approved for a portion of the pediatric population during their initial approval, and for others, years passed before a pediatric indication was included in its approval statement. The wide heterogeneity in device approval times highlights the difficulties in regulating and identifying gaps in pediatric device innovation .
An example of our annotations in PubAnnotation is shown in the eMethods in the , identifying key words related to age according to our annotation guidelines (eFigure 1 in the ). This consensus set for the 297 documents consists of a total of 1568 annotations distributed across the 5 annotation groups agreed on in the annotation guidelines. The most common annotations were age end (349 of 1568 [22%]), age start (346 of 1568 [22%]), and age range (333 of 1568 [21%]). Fewer documents had unit (age start unit: 235 of 1568 [15%]; age end unit: 58 of 1568 [4%]) and age category (247 of 1568 [16%]) to annotate (eFigure 2 in the ). The device statements surveyed contain a wide scope of age ranges indicated for multiple age groups. The mean (SD) number of devices for the pediatric population aged 21 and younger was 47.13 (19.63) devices. However, many of these devices were indicated for patients aged 18 and over. The number of devices with an age indication from 17 years to 18 years increased from 42 to 81 . The increases in devices may owe to the fact that the most common age ranges used in the approval statements were 18 years of age and older and 21 years of age and older.
The 297 documents reviewed accounted for 149 unique devices with PMA statements because some documents were supplements updating the information on a device model. Of these devices, 102 devices (68%) had a pediatric age indication (under 21 years), 10 devices (7%) had a neonate age indication (birth to 28 days), 32 devices (21%) had an infant age indication (29 days to 2 years), 60 devices (40%) had a child age indication (2-12 years), and 94 devices (63%) had an adolescent age indication (12-21 years). Because devices can be indicated for broad age ranges, some devices are indicated for multiple pediatric subgroups; 140 of the 149 identified devices (94%) were class III, and 9 devices (6%) were class II.
Fifteen different clinical specialties were represented by the device statements reviewed (eTable 1 in the ). Of these specialties, the most frequently represented were ophthalmology with 48 devices, cardiology with 22, immunology with 16, and clinical chemistry with 16, whereas radiology, anesthesiology, and physical medicine had 1 associated device each. After stratification for clinical specialty, we found that the diversity of clinical specialties with age-approved devices increased with increasing age . From ages 0 to 17 years, the mean (SD) number of specialties represented was 7.27 (1.4) of the total 15 clinical specialties. In contrast, 12 of 15 specialties were represented from ages 18 to 21 years. From age 17 to 18 years, notable increases in ophthalmology (from 1 to 10), neurology (from 2 to 6), and clinical chemistry (from 9 to 15) were seen. In addition, surgery, orthopedics, radiology, pathology, and physical medicine devices had no representation from ages 0 to 17. Adults aged 21 years and older had the greatest representation across specialties, with 14 (93%) of 15 specialties represented in devices available.
We classified devices into pediatric subgroups based on identified age indications; 40 of the 94 identified adolescent devices (43%) were not indicated for children, infant, or neonate age groups. Few devices were approved specifically for the children (27), infant (17), and neonate (10) age ranges (eTable 2 in the ). This finding is consistent with the higher number of devices available to patients aged 18 years and over. The clinical specialties most associated with devices with a pediatric indication were cardiology (22), ophthalmology (18), and clinical chemistry (16) . The most common devices seen with a pediatric indication include an excimer laser system (13), hepatitis B screening tests (11), automated external defibrillators (7), invasive glucose sensors (7), and cochlear implants (5) . A total of 100 of the pediatric devices (98%) were class III and the remaining 2 (2%) were class II. Neonate For the 10 identified devices with a neonatal age indication, all were hepatitis B screening tests that belonged to the clinical specialties of microbiology or immunology. All were class III devices. Infant For the 32 identified devices with an infant age indication, the most common clinical specialties were cardiology and otolaryngology. Most cardiovascular devices were automated external defibrillators, and most otolaryngology devices were cochlear implants. Thirty-one of these devices (97%) were class III, and 1 device (3%) was class II. Children Of the 60 devices, the most common clinical specialties were cardiology and clinical chemistry. Most of the cardiovascular devices were automated external defibrillators, and most of the clinical chemistry devices were insulin pumps and glucose sensors. A total of 58 of the devices (97%) identified were class III, and 2 devices (3%) were class II. Adolescent Of the 94 devices identified for adolescents, the most common clinical specialties were cardiology and clinical chemistry. Most of the cardiovascular devices were automated external defibrillators, and most of the clinical chemistry devices were insulin pumps and glucose sensors. Excimer laser system devices also made up the most common device accounting for increased ophthalmic devices represented among the adolescent devices. Ninety-two of the devices (98%) were class III and 2 devices (2%) were class II.
For the 10 identified devices with a neonatal age indication, all were hepatitis B screening tests that belonged to the clinical specialties of microbiology or immunology. All were class III devices.
For the 32 identified devices with an infant age indication, the most common clinical specialties were cardiology and otolaryngology. Most cardiovascular devices were automated external defibrillators, and most otolaryngology devices were cochlear implants. Thirty-one of these devices (97%) were class III, and 1 device (3%) was class II.
Of the 60 devices, the most common clinical specialties were cardiology and clinical chemistry. Most of the cardiovascular devices were automated external defibrillators, and most of the clinical chemistry devices were insulin pumps and glucose sensors. A total of 58 of the devices (97%) identified were class III, and 2 devices (3%) were class II.
Of the 94 devices identified for adolescents, the most common clinical specialties were cardiology and clinical chemistry. Most of the cardiovascular devices were automated external defibrillators, and most of the clinical chemistry devices were insulin pumps and glucose sensors. Excimer laser system devices also made up the most common device accounting for increased ophthalmic devices represented among the adolescent devices. Ninety-two of the devices (98%) were class III and 2 devices (2%) were class II.
By examining the time between a generic device category's initial approval and its first approval with a pediatric indication, we quantified the lag time among different device categories. A total of 18 (38%) generic device categories were approved for a portion of the pediatric population at their initial approval; for the others, years passed before a pediatric indication was included in their approval statements. The wide heterogeneity in device approval times highlights the difficulties in regulating and identifying gaps in pediatric device innovation.
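As an illustration of the lag-time calculation described above, the sketch below computes, for each generic device category, the gap between the category's first approval and its first approval carrying a pediatric indication. The records and field names are made up for illustration and are not drawn from the annotated PMA data set.

```python
from datetime import date

# Hypothetical approval records; in the actual analysis these would come
# from the annotated PMA database (field names here are ours).
records = [
    {"category": "cochlear implant", "approved": date(1985, 11, 26), "pediatric": False},
    {"category": "cochlear implant", "approved": date(1990, 6, 27), "pediatric": True},
    {"category": "excimer laser system", "approved": date(1995, 10, 20), "pediatric": True},
]

first_any, first_peds = {}, {}
for r in records:
    c = r["category"]
    first_any[c] = min(first_any.get(c, r["approved"]), r["approved"])
    if r["pediatric"]:
        first_peds[c] = min(first_peds.get(c, r["approved"]), r["approved"])

for c, d0 in first_any.items():
    if c in first_peds:
        lag_years = (first_peds[c] - d0).days / 365.25
        print(f"{c}: {lag_years:.1f} years from first approval to first pediatric indication")
    else:
        print(f"{c}: no pediatric indication to date")
```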
In this cross-sectional study of pediatric indications in PMA statements for medical devices, we characterized the gap in both quantity and diversity of devices approved for use in the pediatric population vs the adult population. Only 42 devices with a PMA statement were identified for use in pediatric patients under the age of 18 years. A notable increase was seen at the ages of 18 and 21, showing a wide gap in the number of devices for the younger pediatric population compared with adults. This gap is particularly problematic given the vulnerable population and the life-sustaining nature of these devices. Devices approved for older adolescents are generally devices approved for adults, which does not address the unique needs of the younger pediatric subgroups. Our study is consistent with the finding that most devices indicated for the pediatric population are limited to those over 18 years of age. Distinction between the pediatric subpopulations is important because the safety and efficacy profile of a device may vary across pediatric subpopulations from neonates to young adolescents. Devices indicated for use in adolescents had the greatest range of clinical specialties represented, while devices indicated for use in children, infants, and neonates had a less diverse range of specialties represented. For instance, we found no devices indicated for ages 0 to 17 years in surgery, orthopedics, radiology, pathology, and physical medicine. This disparity was consistent with the high proportion of pediatric devices that are indicated for the age group of 18 to 21 years. Notably, our analysis identifies the clinical specialties most in need of pediatric device innovation.
With the increasing role of medical devices in modern medicine and the diverse and unique needs of the pediatric population, pediatric device development is important to the field of pediatrics. A number of policies have been implemented to address the need for pediatric devices. Pediatric device consortiums throughout the US provide funding and in-kind support to innovators developing new pediatric devices. Targeted nondilutive funding through the Small Business Innovation Research and Small Business Technology Transfer programs may also provide powerful incentives for developers to prioritize pediatric populations. The System of Hospitals for Innovation in Pediatrics–Medical Devices initiative attempts to provide a comprehensive strategic plan to address the regulatory, financing, hospital, and reimbursement challenges associated with medical device development. In addition to these policies, the use of real-world data is another important measure to address the scarcity of evidence in pediatric devices. Real-world data, including electronic health records, claims, and data from mobile devices, can be used to support expanded indications for preapproved devices or postmarketing surveillance of devices in use.
We noted variability in the descriptions of age ranges and age-based populations in the PMA statements, which made analysis of this information challenging. Although the FDA issued draft guidance and a proposed rule for manufacturers submitting a PMA statement, further standardization or enforcement may be warranted. Standardization of age indications of devices may help reduce uncertainty about the indicated subpopulations for a device.
Increased clarity in age indication descriptions in PMA statements, along with improved accessibility and centralization of data for age indication, is required to optimally evaluate the pediatric medical device landscape.
Limitations
Our study has limitations. We used the FDA-defined guidelines for the pediatric subgroups, but these guidelines may not subdivide the pediatric population in a developmentally or physiologically meaningful manner. Another limitation is that we did not account for devices approved for use in pediatric patients when age was not mentioned in PMA statements. In addition, we analyzed PMA devices, which make up only 10% of available devices. We believe we have laid the groundwork for a much larger analysis of class II and humanitarian device exemption devices.
To our knowledge, this is the most comprehensive characterization of the pediatric high-risk device landscape. Given the segmented nature of information on pediatric devices, we hope this analysis and annotation database may serve as a resource to characterize high-risk pediatric devices. Future studies may annotate PMA statements for other device indication information, such as anatomy, disease, procedure, molecular entities, and contraindications. In addition, we released the natural language processing annotation guidelines, the annotated data set corpus, and all code to promote open science. We have highlighted several areas of need for pediatric device innovation.
|
Automated Identification of Referable Retinal Pathology in Teleophthalmology Setting | ed2bbb40-2c09-4d95-a5ea-e18d6732130b | 8161696 | Ophthalmology[mh] | The coronavirus disease 2019 (COVID-19) pandemic has brought teleophthalmology into the spotlight and highlighted the need for a well-run remote retinal imaging model that, besides good image quality, provides an accurate and fast image interpretation. This project is a part of the large initiative to perform retinal screening in patients with diabetes during their visits to the primary care provider's office. This paper focuses on our efforts to develop an automated system that can efficiently process retinal images obtained during these visits and identify patients who need further ophthalmology attention. Several groups have attempted to address this issue by proposing automated solutions that are either human-in-the-loop systems or operated semi-autonomously. However, developing a fully automated approach was challenging as a significant percentage of uninterpretable images were present in training and testing datasets. Uninterpretable images exist due to inappropriate focus, exposure, or illumination settings used during the image-capturing process and do not contain sufficient image biomarkers for the reviewer to conclude the absence or presence of retinal pathology (i.e. ungradable). Specifically, computer-aided diagnosis tools developed by Usher et al. were able to identify retinal pathology in a semi-automated manner using color fundus photography (CFP) images. However, human interaction was necessary for the image preprocessing or feature extraction steps. Gargeya et al. improved the automation degree, but for interpretable images only, by proposing a one-size-fits-all preprocessing method for CFP images, with the resulting images being processed and classified by convolutional neural networks (CNNs). Additionally, Kermany et al. devised CNN-based models that can identify ophthalmic pathologies from optical coherence tomography (OCT) scans. To further improve the prediction performance and capture the image features jointly across different modalities, Yoo et al. proposed multistream CNN models for automated diagnosis using multimodal inputs (e.g. OCT and CFP). However, to the best of our knowledge, no existing work can be deployed for fully automated retinal pathology diagnosis, mostly because uninterpretable images are excluded from training and testing. This process requires an expert's input to determine ungradable images and exclude them from the dataset. The presence of images with substandard quality is universal and inevitable in clinical practice. This problem might become more accentuated in the future with broader acceptance of automated image capture systems with integrated AI-based diagnosis algorithms. In such instances, no clinician would be present on-site to fine-tune the scanner for each patient or re-take images if the outputs were unsatisfactory. Consequently, a substantial number of ophthalmic screenings on undilated pupils will likely contain uninterpretable images, and it is essential to include those while designing such deep learning (DL) models to allow for their integration into a fully automated diagnosis system for instant and accurate diagnoses. The purpose of this study was to create such an accurate DL approach for retinal image classification and identification of referable retinal pathology.
Our main goal was to develop a CNN model that can automatically handle imperfect images, including uninterpretable images, and provide high validation accuracy and low false-negative rate to identify retinal pathology.
Retinal Imaging
This retrospective study analyzed 1148 OCT and CFP retinal images obtained from 647 patients with diabetes. Images were captured by the Topcon Maestro 3D-OCT multimodality OCT/fundus imaging device (Topcon Inc., Tokyo, Japan). CFP had an angle of 45 degrees ± 5%, or 30 degrees, on the nondilated pupil. The B-scan horizontal range was 3–12 mm ± 5%, with a 4× "Moving Average" oversampling performed to produce the averaged final image. All eligible patients were invited to participate in the study and verbally consented to participate by their primary care provider. The images were taken by trained certified medical assistants (CMAs). The study was a part of the Duke Quality Assessment/Quality Improvement (QA/QI) project and received institutional review board approval from Duke University Health System. The study complied with the principles of the Declaration of Helsinki.
Dataset Formulation
Retinal images (OCT and CFP) were saved in JPEG compression format with a size of 659 × 512 and 661 × 653 pixels. For each OCT volume scan, only the central scan (i.e. the 31st B-scan out of a total of 60 B-scans in each volume scan) through the fovea was used. The images were resized to 299 × 299 to comply with the input dimension of the developed CNN architecture (for more details, see subsection: CNN Design). Images were graded as previously described (Hadziahmetovic et al., JAMA Ophthalmology) by Duke medical retina fellows and a medical retina faculty, and the final grading of de-identified images was done by consensus. The images were classified as follows: (a) uninterpretable images (if no clear identification of the macula was available due to poor positioning or inferior exposure owing to media opacity; containing 2 OCT images and 71 CFP images), (b) retinal pathology negative (RPN; containing 982 OCT and 952 CFP images), and (c) retinal pathology positive (RPP; containing 164 OCT and 125 CFP images; see ). For each patient, there was at least one interpretable image out of all obtained images. The final diagnosis used to train the CNN model was generated using the label consensus mechanism (LCM) presented in Appendix A and . As a result, 924 eyes were labeled as normal (i.e. RPN), whereas 224 eyes were identified as RPP ( and ). To form the testing dataset, we randomly selected 57 RPN and 57 RPP eyes from the available data following a uniform distribution. These numbers represented about 10% of the total eyes, and roughly 6% and 25% of the RPN and RPP eye cohorts, respectively. Uninterpretable images were present in 15 eyes (1 OCT and 14 CFP). The remaining images were used to form the training dataset. We specifically chose this ratio of RPN and RPP samples to ensure a well-balanced testing dataset and guarantee that the resulting dataset contained sufficient samples from the minority class (i.e. RPP).
Study Design and Outcomes Measures
We propose a fully automated system that utilizes a multimodal CNN to identify referable retinal pathology. Additionally, we propose a backpropagation algorithm associated with the CNN model that can train it to minimize the impact of input images that do not contain sufficient biomarkers to determine diagnoses.
Problem Formulation
Pairs of OCT and CFP scans (O k and C k) were obtained from each eye of each patient P k, with some of them being uninterpretable. We designed a CNN model that takes (O k, C k) as input and classifies it as "without" (i.e. RPN) or "with" (i.e. RPP) retinal pathology. Precisely, "without pathology" corresponds to the cases with normal OCT and CFP, and "with retinal pathology" refers to cases where retinal pathology can be identified in at least one of the imaging modalities (i.e. OCT or CFP). Moreover, if either O k or C k was uninterpretable, the outcome was derived from the interpretable image. Finally, if both O k and C k were uninterpretable, we specifically assigned the label as retina pathology potentially present (RPPP); those samples potentially could be selected and removed from the dataset using a separate classification model, as the clinicians would need to perform a further assessment and potentially redo the imaging. (A detailed introduction of this labeling mechanism for paired OCT/CFP inputs is in Appendix A and .)
CNN Design
The design of the CNN model was performed in three phases: (1) expert diagnosis and label consensus (steps I and II); (2) image augmentation and preprocessing (step III); and (3) training with the novel backpropagation algorithm that can work with uninterpretable images (step IV), as illustrated in .
Expert Diagnosis and Label Consensus (Steps I and II)
Each OCT and CFP image was individually labeled by the panel of retina professionals as uninterpretable, RPN, or RPP. Then, to train the CNN model, we determined the final diagnosis as RPN if one imaging modality was deemed uninterpretable and the other RPN, or if both were RPN. Similarly, we labeled a patient RPP if at least one modality was read as RPP. In the case of both modalities being uninterpretable, we referred to it as RPPP (Appendix A and ).
Image Augmentation and Preprocessing (Step III)
Bearing in mind that our dataset was limited (which is often the case with clinical data), we augmented the dataset by rotation, random cropping, flipping, etc. Given that OCT images usually come with extensive background noise, which can prevent DL-based models from capturing the image biomarkers, we applied Gaussian filters for noise reduction. No images were augmented for the validation set. However, the OCT images were de-noised using Gaussian blur, as in the training set. Details are introduced in Appendix B.
CNN Model Architecture and the Back Propagation Algorithm (Step IV)
We developed a multimodal CNN that takes OCT and CFP images jointly as input and classifies them into the RPN and RPP categories . First, the input OCT and CFP images were processed by two sets of convolutional filters to obtain corresponding feature maps. Then, the output feature maps were fed into global average pooling layers for dimension reduction (to derive feature vectors for both imaging modalities), which were then fed into a global, fully connected layer designed to: (1) map the feature vectors to logits; and (2) implicitly reach a consensus between the prediction outcomes (as the results from different imaging modalities could oppose each other, e.g. pathology exists in one but not the other). Finally, Softmax activation was applied to the output layer to map the logits to probabilities of classifying the inputs as RPP. To ensure that the CNN can successfully handle uninterpretable images presented in both training and testing datasets, we developed an alternate gradient descent (AGD) algorithm. This way, we could minimize the impact of uninterpretable images on the prediction performance implicitly, without formulating the binary classification problem as a multicategory task (e.g. RPN, RPP, and uninterpretable).
The AGD algorithm. We first divided all the weight parameters θ in the CNN into three subsets θ1, θ2, and θ3, which represent the weights for the convolutional blocks and global average pooling layer that process the OCT inputs (i.e. Conv_blocks_1 and Avg_pool_1 in ), the convolutional and global average pooling layers for the CFP modality (i.e. Conv_blocks_2 and Avg_pool_2 in ), and the final fully connected layer (i.e. FC_3 in ), respectively. The following briefly illustrates how the AGD algorithm works during the training of the CNN model. In each training iteration, (I) we first updated θ1 by minimizing the binary cross-entropy loss (BCEL) between the CNN predictions corresponding to the input (O k, C k) samples that contain interpretable OCT images and the labels associated with them (i.e. in this step, the uninterpretable images were not included while calculating the training loss); (II) then, similarly, θ2 was updated by minimizing the BCEL between the CNN predictions corresponding to the input (O k, C k) samples with interpretable CFP images and the labels associated with them; and (III) finally, θ3 was updated to minimize the BCEL between the CNN predictions given all input (O k, C k) samples (i.e. both interpretable and uninterpretable OCT/CFP) and the associated labels. After steps I and II, the convolutional filters processing the OCT and CFP modalities (i.e. θ1 and θ2) were trained toward extracting features that can best differentiate RPN/RPP samples if the inputs were interpretable. On the other hand, if one modality (or both modalities) of the inputs was (were) uninterpretable, then the features extracted by the corresponding convolutional filters were considered uninformative, as they were not included during the training of θ1 and θ2. In step III, the weights of the fully connected layer θ3 were optimized to capture whether the features output from θ1 and θ2 imply RPN, RPP, or are uninformative, as well as to learn to infer the correct predictions when the features corresponding to the OCT and CFP modalities carry inconsistent information (e.g. one implies RPN whereas the other implies RPP, or one is uninformative). As a result, the CNN was trained, using the AGD algorithm, to implicitly handle the uninterpretable images contained in the dual inputs (O k and C k) without classifying them as a third class besides RPN and RPP. The illustration of the AGD algorithm from the mathematical perspective is provided in Appendix C (the Python code implementing this algorithm can be accessed from https://github.com/gaoqitong/Alternate-Gradient-Descent-For-Uninterpretable-Images ).
Transfer learning. Transfer learning was applied to pretrain the convolutional blocks (i.e. θ1 and θ2) in the CNN model, as it was shown to be effective in boosting both training efficiency and validation performance. Specifically, we used the open-source OCT dataset containing 108,312 OCT scans from 4 different categories: (1) choroidal neovascularization (37,206 images), (2) diabetic macular edema (DME; 11,349 images), (3) drusen (8,617 images), and (4) normal (51,140 images), which were provided by Kermany et al. We also used the CFP image dataset containing 35,126 CFP images with (25,810 images) and without (9,316 images) diabetic retinopathy (DR) pathology, which were obtained from Kaggle. Then, all the CFP images with DR pathology were flipped horizontally and vertically to balance the number of images in the two classes, and loose pairing was performed to couple the OCT and CFP modalities, which then generated 100,000 "nominal" eyes. We further labeled the OCT images that contained any pathology as RPP and took a logical AND between the individual OCT and CFP labels to determine the final diagnoses used to pretrain the network. Given that these two datasets did not contain any uninterpretable images, we pretrained the network, as illustrated in , by minimizing the cross-entropy loss between the CNN predictions and labels for all images. Appendix C illustrates this optimization problem from a mathematical perspective. Although the open-source OCT dataset did not contain all retinal pathologies that we were interested in, the CNN model was still trained to effectively locate the biomarkers that help distinguish inputs as RPN and RPP, as presented in the Results section.
Specific convolutional layer architecture and training hyperparameters. The convolutional blocks in both the OCT and the CFP branches of the network (i.e. Conv_blocks_1 and Conv_blocks_2 in ) used the Inception-v3 architecture. Furthermore, the open-source OCT dataset that we used to pretrain the CNN model had also been shown to attain the highest accuracy with the Inception-v3 structure. Specifically, during training, both OCT and CFP images were resized to 299 × 299 to comply with the design of the convolutional layers before being fed into the network. After performing global average pooling for the OCT and CFP streams, the image features (i.e. Feature_1 and Feature_2 in ) had the size n × 1 × 1 × 2048, where n denotes the batch size. The two feature vectors were then concatenated and reshaped to an n × 4096 vector, which was then processed by a fully connected layer with 4096 nodes to generate prediction logits. Finally, Softmax functions were applied to normalize the logits as probabilities of classifying the inputs as RPN/RPP. During training, the Adam optimizer was used to minimize training losses, where the learning rate was set to 1e-04 with an exponential decay of 0.91 every 1500 steps.
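To make the label consensus mechanism concrete, a minimal sketch of the per-eye labeling rule described above is given below. The function and label strings are ours; the authors' exact implementation is described in Appendix A.

```python
def eye_label(oct_label, cfp_label):
    """Combine per-modality gradings into a single eye-level label.

    Each input is one of: "RPP", "RPN", "uninterpretable".
    Rule described in the text: RPP if either modality shows pathology,
    RPPP if both modalities are uninterpretable, otherwise RPN.
    """
    if oct_label == "RPP" or cfp_label == "RPP":
        return "RPP"
    if oct_label == "uninterpretable" and cfp_label == "uninterpretable":
        return "RPPP"  # retina pathology potentially present
    return "RPN"

assert eye_label("RPN", "uninterpretable") == "RPN"
assert eye_label("uninterpretable", "RPP") == "RPP"
assert eye_label("uninterpretable", "uninterpretable") == "RPPP"
```

Note that the rule never discards an eye: an uninterpretable modality simply defers to the interpretable one, and only the fully uninterpretable pairs receive the separate RPPP label.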
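A minimal preprocessing sketch consistent with the steps described above (Gaussian de-noising of the OCT B-scan and resizing both modalities to the 299 × 299 Inception-v3 input size). The kernel size and sigma are illustrative assumptions, not the authors' exact settings.

```python
import cv2

INPUT_SIZE = (299, 299)  # required by the Inception-v3 backbone

def preprocess_pair(oct_img, cfp_img):
    """De-noise the OCT B-scan and resize both modalities for the CNN."""
    # Gaussian blur to suppress OCT background noise
    # (kernel size and sigma here are illustrative, not the paper's values).
    oct_img = cv2.GaussianBlur(oct_img, (5, 5), sigmaX=1.0)
    oct_img = cv2.resize(oct_img, INPUT_SIZE)
    cfp_img = cv2.resize(cfp_img, INPUT_SIZE)
    return oct_img, cfp_img
```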
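The following is a minimal sketch of one AGD training iteration as described above, written here in PyTorch purely for illustration; the module and mask names (oct_stream, cfp_stream, head, oct_ok, cfp_ok) are ours, and the authors' released code (linked above) should be consulted for the exact implementation. With two output logits and a softmax, the cross-entropy below plays the role of the BCEL in the text.

```python
import torch
import torch.nn.functional as F

def agd_step(oct_stream, cfp_stream, head, opt1, opt2, opt3,
             oct_x, cfp_x, labels, oct_ok, cfp_ok):
    """One AGD iteration. oct_ok / cfp_ok are boolean masks over the batch
    marking samples whose OCT / CFP image is interpretable."""

    def forward():
        f1 = oct_stream(oct_x)                   # OCT features (theta_1)
        f2 = cfp_stream(cfp_x)                   # CFP features (theta_2)
        return head(torch.cat([f1, f2], dim=1))  # RPN/RPP logits (theta_3)

    def zero_all():
        for opt in (opt1, opt2, opt3):
            opt.zero_grad()

    # Step I: update theta_1 on samples with an interpretable OCT image.
    zero_all()
    F.cross_entropy(forward()[oct_ok], labels[oct_ok]).backward()
    opt1.step()

    # Step II: update theta_2 on samples with an interpretable CFP image.
    zero_all()
    F.cross_entropy(forward()[cfp_ok], labels[cfp_ok]).backward()
    opt2.step()

    # Step III: update theta_3 (the shared fully connected layer) on all
    # samples, so it learns to down-weight uninformative features.
    zero_all()
    F.cross_entropy(forward(), labels).backward()
    opt3.step()
```

Only the optimizer assigned to the targeted parameter subset is stepped in each sub-update, which is how uninterpretable images are kept out of the convolutional streams' gradients while still informing the final classification layer.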
To validate our approach, we selected the following three baseline methods to compare with our method: (1) training two CNN models that classify the OCT and CFP modalities, respectively, into three categories (RPN, RPP, and uninterpretable), with the final diagnoses then determined following the LCM illustrated in Appendix A and ; (2) first, two classifiers are trained to classify the interpretability of the OCT and CFP modalities separately, followed by two CNN models that identify the presence of retinal pathology for interpretable OCT and CFP images, respectively, with the final diagnoses being determined using the LCM; and (3) a two-stream CNN model based on the state-of-the-art multimodal ophthalmological image analysis methods developed by Wang et al., which uses a CNN architecture that does not consider any uninterpretable images but is trained to minimize the cross-entropy loss with conventional backpropagation algorithms, instead of the AGD proposed in our work. In Appendix D, we illustrate the intuition behind designing baselines A and B and their implementation details. shows the performance comparison between our approach and the baseline methods in terms of accuracy, false-negative rate (FNR), recall (or true positive rate), specificity (or true negative rate), and area under the curve (AUC) of the receiver operating characteristic (ROC) curve. FNR is defined as
FNR = FN / (TP + FN) = 1 − Recall,  (1)
with FN representing the false negatives and TP referring to the true positives. We chose FNR as one of the metrics because it evaluates the portion of the RPP patients who are falsely identified as RPN; in other words, the patients who have retinal pathology present but fail to be recognized by the automated diagnosis system due to erroneous classifications. Our approach achieved 88.60% accuracy with a 95% confidence interval (CI) of 82.76% to 94.43%, which outperforms all three baseline methods, as shown in . We also attained an FNR of 12.28% with a 95% CI of 6.26% to 18.31% (or a recall of 87.72% with a 95% CI of 81.69% to 93.74%), which outperforms baselines A and B. To address the fact that baseline C results in a lower FNR (and thus higher recall) than our model, we created the accuracy-FNR plots ( A; the blue curve represents our approach, whereas the orange shows baseline C) showing how the accuracy and FNR change when different decision thresholds are applied to the probabilities output from the CNN (which can be interpreted as the confidence of classifying the input samples as RPP cases). All the thresholds are sampled uniformly between 0.5 and 1, where the top-right end points of both curves correspond to the threshold of 0.5 (i.e. the samples that result in a prediction probability greater than 0.5 are determined as RPP while the rest are classified as RPN) and the bottom-left points are associated with threshold 1 (i.e. all the inputs are classified as RPN regardless of the presence of pathology or not). As can be observed from A, our method is capable of achieving an FNR of 8.77% with a 95% CI of 3.58% to 13.96%, with an accuracy of 81.57% with a 95% CI of 74.45% to 88.69%, which outperforms baseline C with respect to both metrics given a threshold of 0.65 (shown as the red dot in A). Moreover, our method attains higher accuracy than baseline C for any decision threshold in [0.5, 1]. Our approach achieved a specificity of 89.47% with a 95% CI of 83.84% to 95.11%, which outperforms baselines B and C.
Note that baseline A gives rise to a higher specificity due to misclassifying RPP samples as RPN, which is indicated by its very high FNR (33.33%, 95% CI 24.68% to 41.99%) and relatively low AUC (83.58%, 95% CI 76.11% to 91.05%). Finally, our approach reached an AUC of 92.74% with a 95% CI of 87.71% to 97.76%, which is higher than all the baseline methods, as captured by the ROC curves shown in B. Consequently, our approach achieved satisfactory performance evaluated through the five metrics and was able to balance between accuracy and FNR flexibly by selecting appropriate decision thresholds. To evaluate the impact of the uninterpretable images on prediction performance, we evaluated our model after excluding them from the testing dataset (i.e. each eye with at least one uninterpretable image was excluded; ). The performance of our model did not change when evaluated on interpretable images only. On the other hand, the performances of the baseline B and C methods increased dramatically in this setting, as expected, because both methods were not designed to process uninterpretable inputs. Finally, baseline A had slightly decreased accuracy and recall, likely due to the higher FNR of the baseline A model. All of this could be observed by comparing the changes in the FNR between and , where the number of false-negative samples barely decreased when the uninterpretable samples were excluded. In other words, for baselines B and C, removing uninterpretable images improved the classification performance, as those images led to decreasing recall (or increasing FNR), while the opposite was true for baseline A. As presented, the uninterpretable images negatively impacted the baseline methods while having a minimal impact on our approach. We further validated our model by generating class activation maps (CAMs), which can visualize how much "attention" the CNN model is paying to each pixel of the input images (see ). We followed the procedure proposed by Zhou et al., where the weights of the fully connected layer (i.e. FC_3 in ) and the image features generated from the global average pooling layers (i.e. Avg_pool_1 and Avg_pool_2 in ) were used to generate attention values associated with all pixels in the input images from both imaging modalities. This evaluates to what extent each pixel is weighted while the CNN model generates predictions. Higher values correspond to stronger attention, whereas lower values correspond to weaker attention (see ).
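To connect the reported metrics with the accuracy-FNR trade-off discussed above, the short sketch below (variable names are ours; the probabilities shown are toy values, not study data) computes accuracy, FNR, recall, and specificity from predicted RPP probabilities while sweeping the decision threshold between 0.5 and 1, as in the accuracy-FNR curve.

```python
import numpy as np

def metrics_at_threshold(p_rpp, y_true, thr):
    """p_rpp: predicted probability of RPP; y_true: 1 = RPP, 0 = RPN."""
    y_pred = (p_rpp >= thr).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = (tp + tn) / len(y_true)
    fnr = fn / (tp + fn)          # = 1 - recall, Equation (1)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, fnr, recall, specificity

# Sweep thresholds from 0.5 toward 1.0 to trace an accuracy-FNR curve.
# p_rpp and y_true would come from the trained model and the test set.
p_rpp = np.array([0.9, 0.2, 0.7, 0.4, 0.95, 0.3])
y_true = np.array([1, 0, 1, 0, 1, 1])
for thr in np.linspace(0.5, 0.95, 10):
    print(round(thr, 2), metrics_at_threshold(p_rpp, y_true, thr))
```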
There is an unmet need for automated imaging and diagnosis systems for identifying retinal pathology. This limitation of the current healthcare model has been emphasized during the COVID-19 pandemic, especially because ophthalmology has been one of the hardest-hit specialties. Additionally, early recognition of sight-threatening retinal diseases might offer timely treatment, potentially improve visual outcomes, and reduce healthcare costs. Moreover, with improved triage, clinician effort and clinic time might be better spent on other activities, providing improved referral accuracy and more efficient use of ophthalmic resources. This paper introduces a CNN-based approach that enables fully automated retinal image classification into present or absent retinal pathology. Similar existing methods cannot be applied autonomously as they have not been developed while considering uninterpretable images, which are frequently encountered during eye screening, and thus cannot handle them well. By addressing these limitations, our approach facilitates the development of automated retinal diagnosis systems, where a healthcare worker does not need to evaluate the quality of the images (in order for some to be retaken) before they are submitted for the analysis. This system can be deployed either in the clinics for triage or during remote screening (e.g. teleophthalmology) without involving physical interactions between patients and physicians.
Herein, we presented a CNN model that takes OCT and CFP images as dual-modal inputs and predicts if the corresponding eye has retinal pathology (e.g. DR, DME, and age-related macular degeneration [AMD]). Our model was able to process imperfect/uninterpretable images resulting from the patient's poor positioning during the screening or inappropriate parameters (e.g. focus, exposure, and illumination). Inputs obtained from uninterpretable images were utilized during training through a novel backpropagation algorithm that minimizes the impact of images that do not contain sufficient image biomarkers to be determined as RPN/RPP. We created a fully automated retinal pathology diagnosis system (i.e. one that requires no human interaction). To train and validate our model, we collected 1148 pairs of CFP and OCT images from 674 patients, where each pair pertains to a single eye of a patient. We used a 9:1 ratio to split the training and testing dataset. Finally, we attained a validation accuracy of 88.6%, recall/sensitivity of 87.7%, specificity of 89.5%, and AUC for ROC of 0.93. We presented the case which only considers dual-modal inputs (OCT and CFP); regardless, the proposed approach can be further extended to include other imaging modalities (e.g. fundus autofluorescence). Moreover, we observed that the performance of the baseline methods could be negatively impacted when uninterpretable images are used for testing. On the other hand, the performance of our approach was not affected when evaluated with either the full testing dataset or interpretable images only.
Significant work related to this topic was done by Yoo et al., Wang et al., Vaghefi et al., and Xu et al. Specifically, in the Yoo method a pretrained VGG-19 was used to convert input OCT and CFP images into feature vectors, which were then classified as AMD and non-AMD by random forests. In this work, the pretrained CNN was applied for feature extraction without fine-tuning, which potentially could have led to unsatisfactory performance.
Precisely, most of the pretrained models were trained with standard datasets (e.g. ImageNet) that do not contain ophthalmic images, and the resulting models were potentially not optimized for the analysis of OCT or CFP inputs. The other mentioned methods proposed CNN models for the multimodal identification of retinal diseases. Wang et al. and Xu et al. developed two-stream CNNs to jointly analyze the OCT and CFP images. First, each modality was processed by the corresponding stream through convolutional filters and pooling layers for feature extraction using ResNet-18 or ResNet-50 architectures. Then, the two streams' output features were concatenated and fed into a fully connected layer for classification. A slightly different CNN architecture was applied in the Vaghefi method. Each single-modal stream consisted of a few customized convolutional layers for initial processing, with the outputs across streams combined through max-pooling followed by Inception-ResNet-V2 for further processing and classification. Despite being ground-breaking, these methods neither evaluate nor handle uninterpretable images, making them unsuitable for remote retinal image assessments where uninterpretable and low-quality images regularly occur. To capture and generalize the ideas behind these four methods and emphasize the importance of uninterpretable image utilization, we combined them in the baseline C learning approach and compared it to our model. We concluded that although baseline C achieved a slightly lower FNR, it attained 9.3% less accuracy than our method when the presented decision thresholds were used in our model (see ). However, when the decision thresholds in our model were adjusted, our approach achieved both lower FNR and higher accuracy than baseline C, but with slightly lower accuracy than with our initial decision threshold; this highlights that by controlling decision thresholds, we were able to make a tradeoff between accuracy and FNR. Furthermore, this underlined the importance of taking uninterpretable images into account during the training phase and showed that our AGD algorithm and the obtained CNN model could effectively handle uninterpretable images. In addition, we designed the baseline A and B models to evaluate the prediction performance when the AGD backpropagation algorithm was not used and the input images were classified into three categories (i.e. RPN, RPP, and uninterpretable), as opposed to the two-class problem addressed by our model trained with the AGD algorithm. Comparing these two methods to ours showed that our method had higher accuracy and lower FNR. The improved performance of our method and baseline C compared to the baseline A and B methods confirms the strength of multimodal analysis, where the CNN models can effectively capture the correlation among different imaging modalities and make accurate predictions. Finally, FNR is an important factor to consider while validating different image interpretation models because it is crucial not to miss pathology that can have serious consequences. As shown in the ACC-FNR curve in A, our CNN-based approach allows users to balance the tradeoff between accuracy and FNR by customizing the decision threshold (i.e. a threshold around 0.5 can be applied for attaining higher accuracy, whereas a threshold greater than 0.5 leads to lower FNR).
We have developed a fully automated retinal image interpretation system that outperformed other existing computational models. Our multimodal approach used two inputs (CFP and OCT) to identify the presence of retinal pathology, but it is not limited to these imaging modalities. The novel backpropagation algorithm that we proposed was able to utilize low-quality or uninterpretable images (about 6% of all photographs) in the decision-making process, and the model's performance proved to be minimally affected by them.
This approach has limitations, and we will be addressing them as part of our future research. First, we can potentially improve the prediction performance with a dataset containing more balanced labels. Given the FNR of 12.28%, the CNN model may still classify RPP images as RPN; this may be explained by the fact that, regardless of the augmentation, the effective sample size in the RPP group is outnumbered by the effective sample size in the RPN group. Second, although the dataset contains a fairly sufficient number of uninterpretable CFP images, only a limited number of uninterpretable OCT images was available. This leads to an unequal distribution and may potentially influence the final outcome, as the dataset contains a higher number of uninterpretable CFP images. Third, our dataset did not contain samples in which both imaging modalities were uninterpretable; thus, we could not demonstrate the model's performance in that setting. However, the implicit binary classification mechanism (i.e. the AGD algorithm) would not hinder this analysis if such data were available in the dataset. Specifically, the CNN model could still be trained to classify the inputs into two categories (i.e. (i) RPN and (ii) RPP or RPPP). The latter (i.e. RPP and RPPP) samples could be grouped into one class because both should be referred further. Moreover, the CNN model (see ) could identify RPPP samples, as the only difference between processing two uninterpretable modalities and one (or zero) uninterpretable modality is that the fully connected layer θ3 would need to learn to map uninformative features generated by both convolutional streams (i.e. θ1 and θ2) to the corresponding label, rather than mapping one informative feature while the other was uninformative. Given that the AGD algorithm would not interfere with this process during training, we expect that the fully connected layer could learn from such samples and perform inference properly; thus, our approach could process inputs constituted by two uninterpretable modalities. On the other hand, if one prefers to refer the RPPP cases separately from the RPP cases (i.e. classify them into two separate classes), an additional classification model can be introduced to identify the RPPP samples in the dataset before our approach is applied. Finally, our model does not identify specific retinal pathologies (e.g. DR, AMD, and DME) but instead classifies the images as retinal pathology present or absent. The main focus of our future work will be resolving this challenge.
Supplement 1
A novel theoretical framework for simultaneous measurement of excitatory and inhibitory conductances
It is based on frequency analysis of the response of neurons injected with a current composed of two sinusoidal components, and it allows measuring the excitatory and inhibitory conductances simultaneously, together with the membrane potential, as a function of time. We demonstrate this method in-silico using simulations of a point neuron receiving excitatory and inhibitory synaptic inputs, as well as in a realistic pyramidal cell model in which synapses are distributed further away from the soma. Finally, we describe the limitations of this approach in whole cell patch clamp recordings obtained using contemporary intracellular amplifiers.
Transformation of membrane potential and total conductance to E and I conductances

We sought to develop a method that provides a way to simultaneously measure the excitatory and inhibitory conductances in a single trial, with high temporal resolution, during current clamp recording. We begin with the membrane equation for passive synaptic inputs of a point neuron (Eq 1), which can be rearranged to isolate the excitatory and inhibitory conductances as shown in Eq 2.

C \cdot \frac{dV(t)}{dt} = -\left( g_l \left(V(t) - V_l\right) + g_e(t)\left(V(t) - V_e\right) + g_i(t)\left(V(t) - V_i\right) - I(t) \right)   (1)

Replacing V(t) - V_l, V(t) - V_e, V(t) - V_i with V_l(t), V_e(t), V_i(t), respectively, and assuming that the total synaptic conductance equals the sum of the inhibitory and excitatory conductances, g_s(t) = g_i(t) + g_e(t), we get:

g_e(t) = \frac{C \cdot \frac{dV(t)}{dt} + g_l \cdot V_l(t) + g_s(t) \cdot V_i(t) - I(t)}{V_i(t) - V_e(t)}; \qquad g_i(t) = g_s(t) - g_e(t)   (2)

Eq 2 shows that the two inputs can be isolated if the following parameters are known: V(t), membrane voltage; g_l, leak conductance; g_s(t), total synaptic conductance; V_l, V_e, V_i, equilibrium potentials of the individual conductances; C, membrane capacitance; I, stimulus current. This is illustrated in , which shows how this equation works in a simulated point neuron where these parameters are indeed known. We demonstrate this transformation by showing depressing excitatory and inhibitory inputs as well as a step change in conductance; however, it works for any type and dynamics of excitatory and inhibitory inputs.

How do we find these parameters under experimental conditions? The equilibrium potentials are generally assumed to be known and determined from intracellular and extracellular ion concentrations. The leak conductance and membrane capacitance can be measured by injecting hyperpolarizing current steps. The voltage is also easy to resolve during current clamp. However, developing a method to record the membrane potential and at the same time measure the conductance at each time point has been challenging. As we describe below, we can theoretically estimate the total conductance of the cell by measuring the voltage response during injection of a current composed of two high-frequency sinusoidal components. We start with an impedance analysis of passive circuits representing a simplified point neuron with a patch clamp pipette, and describe the relationship between the impedance and the cell conductance.
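As a concrete illustration of this transformation, the sketch below (Python/NumPy) integrates Eq 1 for a passive point neuron with known, toy excitatory and inhibitory conductance waveforms and then applies Eq 2 to recover them. All parameter values and waveforms are illustrative choices, not those used for the figures of this paper.

```python
# Minimal sketch of Eqs (1)-(2): simulate a passive point neuron with known g_e(t), g_i(t),
# then recover them from V(t), the total synaptic conductance and the passive parameters.
import numpy as np

dt = 2e-5                                   # time step [s]
t = np.arange(0.0, 2.0, dt)
C, gl = 0.15e-9, 10e-9                      # capacitance [F], leak conductance [S]
Vl, Ve, Vi = -70e-3, 0.0, -70e-3            # leak/excitatory/inhibitory reversal potentials [V] (illustrative)

# toy inputs: a smooth excitatory bump and a step of inhibition
ge_true = 4e-9 * np.exp(-((t - 0.6) / 0.1) ** 2)
gi_true = np.where((t > 1.0) & (t < 1.5), 8e-9, 0.0)

V = np.empty_like(t); V[0] = Vl
for k in range(len(t) - 1):                 # forward-Euler integration of Eq (1), with I(t) = 0
    dV = (-gl*(V[k]-Vl) - ge_true[k]*(V[k]-Ve) - gi_true[k]*(V[k]-Vi)) / C
    V[k+1] = V[k] + dt * dV

# Eq (2): with V(t), g_s(t), g_l, C and the reversal potentials known, split E from I
gs = ge_true + gi_true
dVdt = np.gradient(V, dt)
ge_rec = (C*dVdt + gl*(V - Vl) + gs*(V - Vi)) / (Ve - Vi)
gi_rec = gs - ge_rec
print("max |error| in recovered excitation (nS):", np.max(np.abs(ge_rec - ge_true)) * 1e9)
```

The recovery is exact up to the discretization error of the derivative, which is the point of the transformation: the hard experimental problem is obtaining g_s(t), addressed in the following subsections.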
Impedance-conductance relationship in a passive point neuron

To develop a method that can be practically used for whole cell patch recordings, we included the resistance of the patch pipette in our analysis. As shown below, the resistance of the electrode affects the measurement of the cell's impedance and thus cannot be ignored. We analyzed, in the frequency domain, the impedance of a circuit composed of a recording electrode (R_s) and a simplified point neuron (composed of a conductance g(t), equal to g_l + g_e(t) + g_i(t), and a capacitor C). The impedance of this circuit is given by Eq 3 (w = 2πf, j is the imaginary unit and f is the frequency in Hertz). The cell conductance (g(t)) and the pipette resistance (R_s(t)) can vary over time, and consequently so can the impedance of the circuit (Z(t)).

Z(f,t) = R_s(t) + \frac{1}{g(t) + j \cdot w \cdot C} = R_s(t) + \frac{g(t)}{g(t)^2 + (w \cdot C)^2} - \frac{j \cdot w \cdot C}{g(t)^2 + (w \cdot C)^2}   (3)

The relationships between the impedance and g for various frequencies (for fixed values of R_s and C) are illustrated in . The same figure also shows that in the presence of R_s, the impedance-frequency curves intersect each other as frequency increases, resulting in a positive relationship between circuit impedance and g over a large range of g (compare ). The presence of R_s also keeps the phase almost constant for different frequencies and g values. Thus, the electrode resistance has a prominent effect on the total impedance of this circuit and should not be ignored when injecting high-frequency sinusoidal current into cells.
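This behavior is easy to reproduce numerically. The sketch below evaluates Eq 3 over a range of conductances for a few frequencies, with and without a series electrode resistance; the parameter values are illustrative.

```python
# Sketch of Eq (3): |Z| as a function of the total conductance g for several frequencies,
# with and without the series electrode resistance Rs (illustrative parameter values).
import numpy as np

C = 0.15e-9                                        # membrane capacitance [F]
g = np.linspace(1/300e6, 1/20e6, 5)                # total conductance [S]

def Z(f, g, Rs):
    w = 2 * np.pi * f
    return Rs + 1.0 / (g + 1j * w * C)             # Eq (3)

for Rs in (0.0, 30e6):
    print(f"\nRs = {Rs/1e6:.0f} MOhm")
    for f in (50.0, 210.0, 315.0, 500.0):
        mags = np.abs(Z(f, g, Rs)) / 1e6
        print(f"  f = {f:5.0f} Hz: |Z| goes from {mags[0]:6.1f} to {mags[-1]:6.1f} MOhm "
              f"as g grows from {g[0]*1e9:.1f} to {g[-1]*1e9:.1f} nS")
```

With Rs = 0 the impedance magnitude decreases slightly as g grows, whereas adding a realistic Rs makes |Z| increase with g at these frequencies, which is the relationship exploited below.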
The in-silico experiment

In the next sections, we show that the response of a point neuron to the injection of a current composed of two sinusoidal components (Eq 4, w_1 = 2πf_1, w_2 = 2πf_2):

I(t) = I_1 \cdot \sin(w_1 \cdot t) + I_2 \cdot \sin(w_2 \cdot t)   (4)

can be used to measure changes in excitatory and inhibitory conductances imposed on the model in a single trial. Although the voltage response in our simulation fluctuates across a large range of more than 35 mV, most of the voltage drop occurs across the electrode resistor, as seen when we set R_s to zero. Due to the low-pass filtering of the input by the passive properties of the cell when injecting high-frequency sinusoidal current, the fluctuations of the voltage across the membrane itself are strongly attenuated, resulting in peak-to-peak amplitudes of less than 6 mV. Such small fluctuations are unlikely to recruit any voltage-gated intrinsic current. Note that the value of the electrode resistance accounts for both the pipette and access resistance. In our simulation we set the electrode resistance to 30 MΩ, which is higher than the typical access resistance in in-vitro recordings, but well within the range of in-vivo recordings. The current and the voltage are used to calculate all the passive properties of the simulated cell in a single trial (i.e., R_s(t), g(t) and C). The computations are all analytical; an approximation is made only when estimating the cell's capacitance, as shown below. As described above, estimating the cell's conductance allows us to measure the excitatory and inhibitory conductances.
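The sketch below sets up a toy version of this in-silico experiment: it reuses the point-neuron parameters and synaptic waveforms from the previous sketch, adds a 30 MΩ electrode resistance, injects the two-sinusoid current of Eq 4 and stores the voltage recorded at the pipette. The later sketches operate on this (I_inj, V_rec) pair. Amplitudes and frequencies are illustrative choices.

```python
# Sketch of the in-silico experiment (Eq 4): the same passive point neuron as above,
# now driven through a series electrode resistance Rs by two summed sinusoids.
# Assumes C, gl, Vl, Ve, Vi, ge_true, gi_true, t, dt from the previous sketch.
import numpy as np

Rs = 30e6                                           # electrode (pipette + access) resistance [Ohm]
f1, f2 = 210.0, 315.0                               # the two stimulation frequencies [Hz]
I1 = I2 = 0.1e-9                                    # sinusoid amplitudes [A]
I_inj = I1*np.sin(2*np.pi*f1*t) + I2*np.sin(2*np.pi*f2*t)      # Eq (4)

Vm = np.empty_like(t); Vm[0] = Vl
for k in range(len(t) - 1):                         # forward-Euler integration of Eq (1)
    dV = (-gl*(Vm[k]-Vl) - ge_true[k]*(Vm[k]-Ve) - gi_true[k]*(Vm[k]-Vi) + I_inj[k]) / C
    Vm[k+1] = Vm[k] + dt * dV

# In current clamp the full injected current flows through Rs, so the recorded voltage is
V_rec = Vm + Rs * I_inj
```

Because the membrane low-pass filters the injected sinusoids, Vm carries only small high-frequency ripples, while most of the sinusoidal excursion in V_rec comes from the Rs term, mirroring the description above.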
Measurement of the cell's total conductance

The first step towards measuring the cell's excitatory and inhibitory conductances using injection of sinusoidal currents is to measure its total capacitance. The cell's capacitance is usually estimated from the response to a step current; other methods for such estimation are also available, such as using a short pulse or variance analysis of the response to injection of noise. Here we show that a cell's capacitance can be well estimated from the response to either one of the two frequencies composing the sinusoidal current (Eq 4). We rely on the assumption that when the frequency of the current is high ((w·C)^2 >> g(t)^2), we can neglect g(t)^2 in the denominators of the second and third terms in Eq 3. Hence, at such frequencies the electrode resistance (R_s) is much larger than the second term, and thus the second term can be neglected. In this case, the total impedance of the circuit is mostly determined by the electrode resistance and the capacitance of the cell, as the latter draws most of the sinusoidal current that is injected into the cell. Here we ignore any stray capacitance in the recording system, such as that of the recording pipette, but below we show that this capacitance can be partially compensated offline. The capacitance of the cell can be estimated from the voltage amplitude and the phase relationship between the voltage and the current. These relationships can be approximated by Eq 5 (see also the phase curves in ), obtained from Eq 3 when w·C >> g:

Z(f) \approx R_s - \frac{j}{w \cdot C}   (5)

For such an estimation to be valid (i.e., for deriving Eq 5 from Eq 3), the frequency of each one of the two current components has to be sufficiently high. For example, for a cell with a mean conductance of 1/100 MΩ and total capacitance of 0.15 nF, recorded with a 10 MΩ electrode (R_s), a ratio of ~88 between (w·C)^2 and g^2 will be obtained at 100 Hz. Since the impedance of the second term in Eq 3 for this example is ~1 MΩ, much smaller than R_s (10 MΩ), we neglect this term. Thus, the capacitance can be obtained from Eq 5, provided we can estimate the electrode resistance and the phase relationship between the current and the voltage. We do this in a single trial when sinusoidal current is injected, by first measuring the electrode resistance (R_s,est) from the ratio of the absolute values of the fast Fourier transform (FFT) of the voltage and the current at the frequency of the injected current, after both traces have been bandpass filtered at one of the two frequencies (F1 or F2, using the 'bandpass' Matlab function, which implements a finite impulse response (FIR) filter). Importantly, this calculation is performed for a time window within which no stimulation is delivered (e.g., 1 second before stimulation). The two vectors (FV, FI: the bandpass filtered voltage and current) are then used to estimate R_s. For the measurement of the capacitance we provide a rough estimate of R_s, denoted with an asterisk; a more precise estimate of R_s is provided later.

R^*_{s,est} = \mathrm{abs}(\mathrm{fft}(FV)) / \mathrm{abs}(\mathrm{fft}(FI)) \quad (\text{at } F_1 \text{ or } F_2)   (6)

The phase between FV and FI is calculated from the Hilbert transform of FV (H operator, for either F1 or F2) using the 'hilbert' Matlab function and averaging over time:

\theta_{est} = \overline{\mathrm{angle}(H(FV)) - \mathrm{angle}(H(FI))}   (7)

Averaging is performed over the same time window as above, within which no stimulation is delivered (e.g., 1 second before).
The trigonometric relationship between the real and imaginary parts in Eq 5 is described in Eq 8, allowing us to estimate the cell's total capacitance given that R_s and θ_est are measured as described in Eqs 6 and 7:

C_{est} = 1 / \mathrm{abs}\left( \tan(\theta_{est}) \cdot R^*_{s,est} \cdot w \right)   (8)

In the example shown in Figs and , the real capacitance was set to 0.15 nF and was estimated as 0.149 nF. Note that an estimate of C of similar accuracy can also be obtained when setting R_s to zero. We then use the estimated capacitance of the cell to measure the cell's conductance and to obtain a more accurate measurement of the electrode resistance, both over time in a single trial. In this computation these values are measured based on the analytical solution of Eq 3, this time without making any approximations. Here we use the fact that the current contains two sinusoidal components having two different frequencies (F1 and F2, e.g., 210 Hz and 315 Hz as used in the example). Since Z(f) decreases with increasing frequency, increasing the frequencies, although it allows higher temporal resolution, will reduce the signal-to-noise ratio in the presence of noise. The voltage and the current are then bandpass filtered at the two frequencies (in , due to screen resolution, these are displayed as patches of color). Note the small modulations in the bandpass filtered voltage signals, which are on the order of about 1 mV. These modulations result from changes in the cell's conductance during the simulation of the synaptic inputs, following the relationship between them shown in . For each bandpass filtered voltage and current trace, FV_1(t), FV_2(t), FI_1(t), FI_2(t), we computed the Hilbert transforms (HFV_1(t), HFV_2(t), HFI_1(t), HFI_2(t), using the 'hilbert' Matlab function). These complex vectors are then used to calculate the impedance of the cell at the two frequencies over time:

Z_1(f_1, t) = HFV_1(t) / HFI_1(t)   (9)

Z_2(f_2, t) = HFV_2(t) / HFI_2(t)   (10)

The absolute values of these complex vectors, shown in , demonstrate curves whose shape is similar to that of the total conductance of the cell (leak plus synaptic conductances). Note that when the conductance of the cell increases during activation of these inputs, the impedance is also elevated; this only happens in the presence of R_s, as shown in . These two impedance vectors are then used together to solve Eq 3, obtaining a solution for R_s(t) and g(t) (when z_1 ≠ z_2; C is the estimated capacitance). To this end we used Mathematica (Wolfram) to solve the two equations for the absolute values of z_1 and z_2 ("Solve[Abs[r + 1/(g + I*w1*c)] == Abs[z1] && Abs[r + 1/(g + I*w2*c)] == Abs[z2], {r, g}]", where I is the imaginary unit in Mathematica), which gives the following solutions for R_s and g (here Z_1 and Z_2 are complex time-dependent vectors, j is the imaginary unit, and C is the estimated capacitance):

R_{s,est}(t) = \frac{1}{2 j c (w_1 - w_2)} \left( j c \left(w_1 Z_1 - w_2 Z_1 + w_1 Z_2 - w_2 Z_2\right) + \left( \left(j c \left(w_1 Z_1 - w_2 Z_1 + w_1 Z_2 - w_2 Z_2\right)\right)^2 - 4 j c (w_1 - w_2)\left(Z_1 - Z_2 + j c w_1 Z_1 Z_2 - j c w_2 Z_1 Z_2\right) \right)^{0.5} \right)   (11)

g_{est}(t) = -j \cdot \frac{-j + c \cdot R_{s,est}(t) \cdot w_1 - c \cdot w_1 \cdot Z_1}{R_{s,est}(t) - Z_1}   (12)

In Eqs 11 and 12, z_1, z_2 as well as R_{s,est}(t) are time-dependent variables. An identical estimate is obtained in Eq 12 after replacing w_1 and z_1 with w_2 and z_2.
In , we again plotted the two impedance curves and also included the electrode resistance (R_{s,est}(t)), which is only slightly larger than the real value used in the simulation. The estimated total conductance is plotted in . Note that the estimated total conductance is almost identical in shape and magnitude to the sum of the leak, excitatory and inhibitory conductances used to simulate the membrane potential in this example.
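To tie the steps of this subsection together, the sketch below runs the pre-stimulus capacitance estimate (Eqs 6-8) and the time-resolved impedance and conductance estimate (Eqs 9-12) on the simulated (I_inj, V_rec) pair from the earlier sketch. A Butterworth band-pass from scipy.signal stands in for Matlab's FIR 'bandpass' filter, and instead of the closed-form Mathematica solution of Eqs 11-12 the two absolute-value equations are solved numerically at each evaluated time point; this is a sketch of the procedure, not the authors' code.

```python
# Sketch of Eqs (6)-(12). Assumes t, dt, I_inj, V_rec, f1, f2 from the previous sketches.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.optimize import least_squares

fs = 1.0 / dt
def bandpass(x, f0, half_width=5.0):
    b, a = butter(4, [(f0 - half_width) / (fs / 2), (f0 + half_width) / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# --- Eqs (6)-(8): rough Rs, phase and capacitance from a pre-stimulus window ---
pre = (t > 0.05) & (t < 0.4)                       # no synaptic input in this window
FV, FI = bandpass(V_rec, f1)[pre], bandpass(I_inj, f1)[pre]
kf = np.argmin(np.abs(np.fft.rfftfreq(FV.size, dt) - f1))
Rs_rough = np.abs(np.fft.rfft(FV))[kf] / np.abs(np.fft.rfft(FI))[kf]        # Eq (6)
theta = np.mean(np.angle(hilbert(FV)) - np.angle(hilbert(FI)))              # Eq (7)
C_est = 1.0 / abs(np.tan(theta) * Rs_rough * 2 * np.pi * f1)                # Eq (8)

# --- Eqs (9)-(10): time-resolved impedances at the two frequencies ---
Z1 = hilbert(bandpass(V_rec, f1)) / hilbert(bandpass(I_inj, f1))
Z2 = hilbert(bandpass(V_rec, f2)) / hilbert(bandpass(I_inj, f2))

# --- Eqs (11)-(12), solved numerically: Rs(t) and g(t) from |Z1(t)|, |Z2(t)| ---
w1, w2 = 2*np.pi*f1, 2*np.pi*f2
def solve_rs_g(a1, a2):
    def resid(p):                                  # p = (Rs in MOhm, g in nS), keeps scaling sane
        Rs, g = p[0]*1e6, p[1]*1e-9
        return [abs(Rs + 1/(g + 1j*w1*C_est)) - a1,
                abs(Rs + 1/(g + 1j*w2*C_est)) - a2]
    return least_squares(resid, x0=(20.0, 15.0), bounds=([0.0, 0.0], [1e3, 1e3])).x

step = max(1, int(1e-3 / dt))                      # evaluate every ~1 ms
sol = np.array([solve_rs_g(abs(Z1[k]), abs(Z2[k])) for k in range(0, len(t), step)])
Rs_t, g_t = sol[:, 0]*1e6, sol[:, 1]*1e-9          # electrode resistance [Ohm], total conductance [S]
print(f"C_est = {C_est*1e9:.3f} nF, median Rs = {np.median(Rs_t)/1e6:.1f} MOhm")
```

The recovered g_t should rise and fall with the imposed synaptic conductances while Rs_t stays near the simulated 30 MΩ; samples near the trace edges can be distorted by filter edge effects and are best discarded in practice.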
Estimation of the excitatory and inhibitory conductances from the cell's conductance and membrane potential

After estimating the total conductance, Eqs 1 and 2 are used to compute the excitatory and inhibitory conductances as discussed above. Since sinusoidal current is injected into the cell (with two frequency components), we bandstop filter around each frequency (±5 Hz) to obtain a clean version of the membrane potential. Before we use Eqs 1 and 2, we need to calculate the resting membrane potential and its corresponding leak conductance. We do this using the lowest 5th percentile of the total conductance vector, which we assume reflects the resting state in which no synaptic inputs are evoked; the mean conductance over these samples gives g_l,est, and the corresponding membrane potential values in the cleaned trace were used to calculate the mean resting potential (V_l). The synaptic conductance is then simply given by g_s,est(t) = g_est(t) - g_l,est (the difference between the total conductance and the leak conductance). In the transformation presented in Eqs 1 and 2, we assume that the reversal potentials of excitation and inhibition are available to us (i.e., 0 mV and -70 mV). The capacitance and total conductance are obtained as described above. The results of these computations are shown in . Our calculations revealed that the estimated conductances are almost identical to the real inputs of the simulated cell (compare Figs to ). We note that our method allows estimating the conductances even when tonic input exists, as demonstrated by the step change in excitation and inhibition (shown between 3 to 4 seconds). In fact, the Pearson correlations between the real inputs and the estimated inputs for this simulated example were extremely high: 0.999 for excitation and 0.996 for inhibition.
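Continuing the running example, the sketch below removes the two sinusoidal components from the recorded voltage, estimates the leak conductance and resting potential from the lowest 5% of the estimated total conductance, and then applies Eq 2 to split the synaptic conductance into excitation and inhibition. Reversal potentials of 0 mV and -70 mV are assumed, as in the text; the variable names carried over from the previous sketches are assumptions of this sketch.

```python
# Sketch of the final separation step. Assumes t, dt, step, V_rec, f1, f2, C_est, g_t and
# bandpass() from the previous sketches; no slow current is injected besides the two sinusoids.
import numpy as np

# membrane potential cleaned of the two sinusoidal components, on the same time grid as g_t
V_clean = (V_rec - bandpass(V_rec, f1) - bandpass(V_rec, f2))[::step]
ts = t[::step]

rest = g_t <= np.quantile(g_t, 0.05)        # lowest 5th percentile of the total conductance
g_l_est = np.mean(g_t[rest])                # leak conductance estimate
V_l_est = np.mean(V_clean[rest])            # resting potential estimate
g_s = g_t - g_l_est                         # synaptic conductance

Ve_rev, Vi_rev = 0.0, -70e-3                # assumed reversal potentials [V]
dVdt = np.gradient(V_clean, ts)
g_e_est = (C_est*dVdt + g_l_est*(V_clean - V_l_est) + g_s*(V_clean - Vi_rev)) / (Ve_rev - Vi_rev)  # Eq (2)
g_i_est = g_s - g_e_est
```

Comparing g_e_est and g_i_est against the ge_true and gi_true waveforms imposed in the first sketch gives a simple end-to-end check of the whole pipeline.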
Computing the excitatory and inhibitory conductances of a cell embedded in a balanced network

We asked if our approach can be used to reveal the underlying excitatory and inhibitory conductances of a model cortical neuron embedded in an active network where it receives excitatory and inhibitory inputs. Therefore, we used a simulation of a cortical network in a balanced asynchronous state to obtain the excitatory and inhibitory synaptic inputs of a single cell (kindly provided by Dr. Michael Okun, University of Leicester). We used these conductances in a simulation of a single cell, in which we injected a current with two sinusoidal components (210 Hz and 315 Hz) via a 50 MΩ electrode and measured the response of the cell, before and after filtering out the two sinusoidal components from the membrane potential ( , black trace, which superimposes almost perfectly on the one obtained without current injection, blue trace). We then used our computations to estimate the excitatory and inhibitory conductances. Note, however, that for both inputs the estimated conductances are more negative than expected. This is simply because the leak conductance was estimated from the 5th percentile of the total conductance of the cell; since synaptic activity persisted throughout the trace, this leak estimate reflects a mixture of the true leak conductance and some baseline synaptic activity. Nevertheless, the estimated excitatory and inhibitory synaptic conductances were very similar to those used as inputs, and, like the real inputs, the estimated E and I conductances were highly correlated. Our approach was also successful in measuring E and I inputs when they are not correlated ( , obtained by shifting the inhibitory input by 10 seconds relative to excitation). Indeed, as expected for this case, no correlation was measured between the measured inputs. In summary, our approach allows accurate estimation of excitatory and inhibitory inputs in various conditions, without any need to take into account the dynamic and statistical properties of the excitatory and inhibitory inputs.
Measurement of E and I inputs during large variations in access resistance

Changes in access resistance, due to an incompletely ruptured membrane or to movement of the recorded cell and preparation pressing the pipette onto the membrane, are well-known limitations of whole cell patch recordings. However, one of the advantages of the approach is its ability to track changes in the electrode and access resistance and to take them into account when calculating the total conductance of the cell with high temporal resolution. We demonstrate this by simulating rapid changes in the electrode resistance during the in-silico recordings ( , identical synaptic inputs to those used in ). These variations led to a noisy impedance measurement. However, since we can measure the access resistance over time (R_{s,est}(t)) and the total g(t) at the same time (Eqs 11 and 12), followed by measurement of excitation and inhibition as described in Eq 2, the changes in the electrode resistance had no apparent effect on the ability to accurately estimate the inhibitory and excitatory conductances.
Measurement of E and I inputs in the presence of realistic noise

Next, we asked how sensitive our measurements are in the presence of realistic noise. Therefore, we used a typical patch electrode to record a voltage trace in a slice setup with the electrode positioned outside a neuron (kindly provided by Dr. Alexander Binshtok, Hebrew University). We then added this noise to our simulated voltage prior to the measurement of excitation and inhibition. A sample of the voltage in the absence of sinusoidal current injection is shown in the inset of (voltage scale bar is 0.5 mV). Despite the presence of such noise (standard deviation of 0.04 mV), and a concomitantly noisier measurement of excitation and inhibition, their values closely matched those we imposed as inputs in the simulation.
Compensation for electrode capacitance

In the above computations we assumed that the recordings are made with a pipette of zero capacitance. However, electrode capacitance can greatly affect the measurement using our novel algorithm. Most of the stray capacitance of recording pipettes is formed by the separation between the solutions inside and outside the glass pipette. Experimentally, it can be reduced, but not eliminated, by coating the pipette with hydrophobic material. Pipette capacitance (C_p, illustrated in ) can also be neutralized by the electronic circuit of the intracellular amplifier, using a positive feedback circuit. In our in-silico experiment, we show that C_p can greatly affect the measurement, as pipette capacitance draws some of the injected sinusoidal current. As a result, the impedance measurements for the two frequencies (z_1 and z_2) are smaller than expected from the cell and R_s alone (in , R_s is 20 MΩ and the curves are well below this value). This, in turn, results in a much higher leak conductance and a completely wrong estimation of the synaptic conductances based on Eqs 1 and 2. Altogether, our estimations can be flawed, leading to a negative evoked inhibitory conductance. To compensate for the impedance reduction due to the pipette capacitance, we estimated C_p and then used this value to correct the measured impedances. Here we show the theoretical admittance (Y, Y = 1/Z) at each of the two frequencies for the equivalent circuit of a cell recorded with a pipette that has stray capacitance, as shown in . The second terms in the following equations depict the admittance of the stray capacitance (Eqs 13 and 14 were derived from the circuit presented in ; G is the cell's total conductance).

1/Z_1 = \frac{1}{R_s + \frac{1}{G + j \cdot w_1 \cdot C}} + j \cdot w_1 \cdot C_p   (13)

1/Z_2 = \frac{1}{R_s + \frac{1}{G + j \cdot w_2 \cdot C}} + j \cdot w_2 \cdot C_p   (14)

From these two equations, and replacing 1/(R_s + \frac{1}{G + j \cdot w_1 \cdot C}) with Y_1 (and correspondingly Y_2), C_p is given by:

C_p = \frac{(1/Z_1 - 1/Z_2) - (Y_1 - Y_2)}{j (w_1 - w_2)}   (15)

However, the values of Y_1 and Y_2 are unknown and are those we seek. We found, however, that the second term (Y_1 - Y_2) can be neglected, as it is much smaller than 1/Z_1 - 1/Z_2. For example, for the parameters used in this simulation, the ratio between 1/Z_1 - 1/Z_2 and Y_1 - Y_2 is ~200, clearly justifying our next approximation, in which we use the measured impedance curves obtained using Eqs 9 and 10 (shown as measured Z_1 and Z_2 below, both time dependent):

C_{p,est} \approx \left\langle \frac{1/Z_1 - 1/Z_2}{j (w_1 - w_2)} \right\rangle   (16)

We then use this estimated value of C_p (averaged over a selected time window, e.g., 1 s, before the stimulation, under the assumption that synaptic inputs are silent during this time) to calculate the estimated impedance of the cell and the electrode alone, as theoretically expected (Z' = 1/Y' = R_s(t) + \frac{1}{g(t) + j \cdot w_1 \cdot C}). This is done by subtracting the C_{p,est} component from the two measured Z curves, after rearranging Eqs 13 and 14:

1/Z'_1 = 1/Z_1 - j \cdot w_1 \cdot C_{p,est}   (17)

1/Z'_2 = 1/Z_2 - j \cdot w_2 \cdot C_{p,est}   (18)

The new Z' vectors are then used as the inputs to Eqs 11 and 12 and to the subsequent processing described above. This approach greatly improved the measurement of excitation and inhibition. Hence, this component of the analysis, which can be switched on and off, can help resolve the analysis of real recordings, where stray capacitance always exists.
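A minimal sketch of this correction, applied to the time-resolved impedances from the earlier sketch, is shown below. Note that the toy simulation used here did not include a pipette capacitance, so the estimate should come out close to zero; with real recordings the corrected Z' vectors would replace Z1 and Z2 in the subsequent steps.

```python
# Sketch of Eqs (16)-(18): estimate Cp from the pre-stimulus admittance difference and
# subtract its admittance from the measured 1/Z1, 1/Z2. Assumes Z1, Z2, w1, w2 and the
# pre-stimulus mask `pre` from the earlier sketches.
import numpy as np

Cp_est = np.mean(((1/Z1[pre] - 1/Z2[pre]) / (1j*(w1 - w2))).real)   # Eq (16); ideally ~real valued
Z1_corr = 1.0 / (1/Z1 - 1j*w1*Cp_est)                               # Eq (17)
Z2_corr = 1.0 / (1/Z2 - 1j*w2*Cp_est)                               # Eq (18)
print(f"Cp_est ~ {Cp_est*1e12:.2f} pF")
```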
Measuring synaptic conductances in morphologically realistic neurons

To assess how our method resolves dendritic conductances, we simulated a morphologically realistic CA1 pyramidal cell. We uniformly distributed 50 inhibitory and 50 excitatory synapses proximal to the soma. We realized that, due to escape of the injected sinusoidal current into the dendrites, the estimated leak conductance is much larger than its actual value. In the case of proximal synaptic inputs, less current escapes towards the dendrites during activation of these inputs than under pre-stimulation conditions. We compensated for this change by dynamically altering the strength of the leak conductance at each time point, based on the estimated total synaptic conductance, before calculating the excitatory and inhibitory conductances (Eqs 1 and 2), using this empirical equation:

g'_l(t) = g_l \left( 1 - e^{-\left(g_s(t)/g_l\right)^2} \right)   (19)

Such a change is equivalent to a dynamic change in the electrotonic length of the cell, known to cause space clamp errors. It shows that for weak proximal synaptic input this function strongly reduces the newly calculated leak conductance (g'_l(t)), as expected, and that this allows us to compensate for the current escape. However, when the synaptic inputs get stronger the function increases the leak, as less current is expected to escape to the dendrites due to the shunting effect of the input. Although those synapses are on average 129.92 μm (±47.83 μm SD) away from the soma, our method resolves the excitatory and inhibitory conductances in a single trial at least as well as the voltage clamp measurements do during two separate trials. When the synapses are moved further away, to an intermediate distance of 238.69 μm (±39.71 μm SD), our method underestimates the conductance to the same extent as voltage clamp. Under most biological conditions synapses are not constrained to a narrow part of the dendrite. Therefore, we uniformly distributed synapses anywhere on the apical dendritic tree. This resulted in synapses with an average distance to the soma of 309.92 μm (±164.46 μm SD). In this case, our method still follows the conductances but underperforms compared to voltage clamp. Because the measurement quality seemed to decrease with distance, we performed additional simulations to quantify the relationship between somatic distance and recording quality.
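As a brief aside before that quantification, the behavior of the empirical correction in Eq 19 is easy to verify numerically; the sketch below evaluates it for a few illustrative synaptic conductance values and shows the behavior described above (weak input strongly reduces the effective leak, strong input restores it).

```python
# Sketch of Eq (19): dynamic rescaling of the leak conductance by the estimated synaptic
# conductance (illustrative values; g_l here stands for the somatically estimated leak).
import numpy as np

def corrected_leak(g_s, g_l):
    return g_l * (1.0 - np.exp(-(g_s / g_l) ** 2))      # Eq (19)

g_l = 10e-9
for g_s in (1e-9, 5e-9, 10e-9, 30e-9):
    print(f"g_s = {g_s*1e9:4.1f} nS  ->  g_l' = {corrected_leak(g_s, g_l)*1e9:5.2f} nS")
```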
Conductance measurements of proximal inputs are stable and reliable

To investigate the relationship between measurement quality and the synaptic distance to the soma, we simulated a single excitatory and a single inhibitory synapse at the same dendritic segment. As above, we found that we can reliably isolate the conductances when the synapse pair is close to the soma. At extremely distal synapse locations, the measurement becomes unreliable; even the voltage clamp ceases to follow the temporal dynamics. To quantify the extent to which our measurement follows the temporal dynamics of the conductance, we calculated the correlation coefficient between the measured and the true conductance. We found that the measurements are very reliable for synapses below 400 μm somatic distance. Above that distance, the measurement quality breaks down abruptly for the excitatory conductance.
We describe a novel framework to estimate the excitatory and inhibitory synaptic conductances of a neuron under current clamp in a single trial with high temporal resolution while tracking the trajectory of the membrane potential. We show that the method allows estimating these inputs also in a morphologically realistic model of a neuron. The work described here is theoretical and lays the foundations for future experimental work. The method is based on the theory of electrical circuit analysis over time when a cell is injected with the sum of two sinusoidal currents. This allows us to measure excitatory and inhibitory conductances and at the same time track the membrane potential. We demonstrated the method in simulations of a point neuron and in realistic simulations of a pyramidal cell receiving proximal and uniformly distributed synaptic inputs. For the point neuron, we showed that we could reveal the timing and magnitude of depressing excitatory and inhibitory synaptic inputs with high temporal resolution and an accuracy above 99% (Figs and ). In another example, we used our method to reveal these inputs during an asynchronous balanced cortical state and showed that excitation and inhibition dynamics can be measured with high accuracy. Importantly, these estimations were obtained from single trials and allowed obtaining the natural dynamics of the membrane potential by filtering out the sinusoidal components of the response to the injected current. Therefore, our method is especially suitable for estimation of excitation and inhibition when these inputs are not locked to stereotypical external or internal events, such as during ongoing activity. We note that when injecting high-frequency current (of a couple of hundred Hertz and above), the voltage drops mostly across the recording electrode. Here we tuned the current amplitude to produce a few millivolts of sinusoidal fluctuation across the cell membrane, which should have minimal effect on voltage-dependent intrinsic and synaptic conductances when performing recordings in real neurons.

Comparisons with other methods

Measurement of average excitatory and inhibitory conductances of single cells: Excitatory and inhibitory synaptic conductances of a single cell were measured both under voltage clamp and current clamp recordings, focusing in-vivo on the underlying mechanisms of feature selectivity in the sensory response of cortical cells and on the role of inhibition in shaping the tuned sensory response of mammalian cortical neurons . Conductance measurement methods were also used to reveal the underlying excitatory and inhibitory conductances during ongoing Up and Down membrane potential fluctuations, which characterize slow-wave sleep activity . The advantages and caveats of these methods were reviewed in . Common to these conductance measurement methods is the requirement to average the data over multiple repeats, triggered on a stereotypical event (such as the time of sensory stimulation or the rising phase of an Up state), and then to average trials at different holding potentials. The averaged data are then fitted with the membrane potential equation (assuming that the reversal potentials are known) to reveal the conductance of excitation and inhibition at each time point. However, these methods cannot reveal inhibition and excitation simultaneously in a single trial, and only estimate averaged relationships. Our proposed method, on the other hand, allows for simultaneous measurements during a single trial.
Importantly, since there is no need to depolarize or hyperpolarize the cell, our method allows measurement of synaptic conductances at the resting potential of the cell, potentially obtaining measurements of voltage-dependent conductances as they progress during the voltage response to the synaptic inputs. We note that our method shares the basic approach for the analysis of point neurons, using the theory of frequency analysis of electrical circuits, with capacitance measurement methods . An alternative approach for estimating the excitatory and inhibitory conductances of a single cell was demonstrated for retinal ganglion cells . In this study the clamped voltage was alternated between the reversal potential of excitation and inhibition at a rate of 50 Hz and the current was measured at the end of each step. This study revealed strong correlated noise in the strength of both types of synaptic inputs. However, unlike the method proposed here, the underlying conductances are not revealed simultaneously and, due to the clamping, the natural dynamics of the membrane potential is entirely unavailable, preventing examination of the role of intrinsic voltage-dependent dynamics in the generation of neuronal subthreshold activity.

Single trial measurements of g_e(t) and g_i(t) under various assumptions on synaptic dynamics

Theoretical and experimental approaches based on the dynamics of excitatory and inhibitory conductances in a single trial were previously proposed. Accordingly, excitation and inhibition are revealed from current clamp recordings in which no current is injected. Approaches based on Bayesian methods that exploit multiple recorded trials have been proposed, and estimation of these inputs in a single trial has also been proposed, but the latter lacks the ability to track fast changes in these conductances . A group of other computational methods showed that excitatory and inhibitory conductances could be revealed in a single trial when analysing the membrane potential and its distribution. Common to all these methods is the requirement to observe clear fluctuations in the membrane potential. Our method, however, allows revealing these inputs even if no change in membrane potential due to synaptic input is observed (except for the response to the injected sinusoidal current). Changes in conductance are often expected even when the membrane potential is stable, for example when a cell is receiving tonic input (see the step change in excitation and inhibition in Figs and , between 3 to 4 seconds, resulting in a constant membrane potential value) and when a constant balance between excitatory and inhibitory currents exists.

Paired intracellular recordings

The substantial synchrony of the synaptic inputs among nearby cortical cells allows continuous monitoring of both the excitatory and inhibitory activities in the local network during ongoing and evoked activities. A similar approach was also used to study the relationships between these inputs in the visual cortex of awake mice as well as gamma activity in slices . While paired recordings are powerful when examining the relationships between these inputs in the local network, such recordings do not provide definitive information about the inputs of a single cell.
Moreover, although the instantaneous relationship between excitatory and inhibitory inputs can be revealed by this paired recording approach, the maximum inferred degree of estimated correlation between excitation and inhibition is bounded by the amount of correlation between the cells for each input, which may change across stimulation conditions or brain state . For example, a reduction in the correlation between excitation, as measured in one cell, and inhibition, measured in the other cell, can truly suggest a smaller correlation between these inputs for each cell, but it can also result from a reduction of synchrony between cells, without any change in the degree of correlation between excitation and inhibition of each cell. This caveat of paired recordings prevents us from determining, for example, whether cortical activity shifts between balanced and unbalanced states . Simultaneous measurement of excitatory and inhibitory conductances of a single cell across states will allow these and other questions to be addressed.

Limitations

Theoretically, increasing the frequency of the sinusoidal waveforms of the injected current in our method improves the temporal precision when measuring synaptic conductances. However, this comes at the expense of sensitivity, which is reduced as frequency increases ( and ). In our simulations we limited the frequency of the injected current to about 350 Hz. In this range, our simulations, which depict realistic passive cellular properties and typical sensory-evoked conductances, result in a clear modulation of the voltage when injecting a ~1 nA sinusoidal current. When bandpass filtering the voltage, the modulation is on the order of only a millivolt, but is still above the equipment noise. We show that changes in access resistance due to an incompletely ruptured membrane or other factors, such as mechanical vibration causing the membrane to move with respect to the pipette, can be well measured and compensated for . Hence our approach can be implemented to estimate excitatory and inhibitory inputs of a cell in these realistic conditions. Another aspect that might reduce the sensitivity of our method is the presence of pipette stray capacitance. We developed a modular component in the analysis that can be used to correct some of this stray capacitance . Importantly, no additional measurement is needed beyond the injected sine waves, delivered in a single trial, to measure this stray capacitance and compensate for its effect. Yet, when the stray capacitance is much higher than demonstrated here, this approach fails to provide a good estimation of the synaptic conductance. Hence, special care will still be needed to minimize any stray capacitance as much as possible. We demonstrate in simulations of morphologically realistic neurons that we can estimate proximal synaptic inputs in a single trial using our approach. Although we underestimated these inputs when compared to simulated voltage-clamp experiments, their shape and relationships were preserved in our measurements if the inputs impinged on dendrites not more distant than 400 μm from the soma of our implementation of a pyramidal cell. Even though this limitation should be considered in real recordings, these data also suggest that the method will provide an adequate assessment of proximal inputs.
Possible application of the method for measurement of non-synaptic intrinsic conductances

Our method can also be used when voltage-dependent conductances evolve naturally, as we can measure these inputs at the resting potential of the cell, as long as the sinusoidal fluctuations across the membrane due to the injected current are small. Such an approach can therefore be used when performing pharmacological tests, such as testing the effects of modulators, agonists and antagonists of various ion channels. Due to the ability to measure these inputs in a single trial, the time course of the effects can be studied on rapid time scales while examining the effects of such drugs on both inputs at the same time. In summary, our theoretical study shows that synaptic and other conductances can be measured at high temporal resolution in a single trial when cells are recorded at their resting potential. More research is needed to determine whether this approach can be used successfully during physiological recordings from real neurons.

Feasibility of the technique in real recordings

The expected signal-to-noise ratio, based on the addition of realistic noise, is sufficiently high to measure the excitatory and inhibitory input during in-vitro recordings. However, it is clear that this framework has to be tested in real recordings of neurons. We fully disclose that we made attempts to test the method in real recordings and discovered that in most of our recordings (none shown here) the measurements were unsuccessful. Following tests of the impulse response of the amplifier, we found that this results from an active feedback circuit in our intracellular amplifiers. We are currently improving the amplifier circuitry and, in parallel, developing algorithms that will incorporate the frequency response characteristics of these amplifiers.
Simulations

To develop the method we constructed a simple simulation of a single-compartment neuron attached to a resistor, simulating the resistance of the recording pipette ( R_s is the electrode resistance). I_m is the injected current and the other variables are as defined in Eqs and . Also note that the capacitive current is given by I_c = I_m - k·(V_p - V_m)/R_s , where V_p is the recorded voltage (across the recording pipette), V_m is the voltage across the membrane only, and I_c is the stray current. For k = 0 we assume no stray capacitance, and for k = 1 stray capacitance is included. Hence at each time point we calculated ( dt is the time step of the simulation):

dV_m = \frac{dt}{c} \left( g_l (V_m - V_l) + g_e (V_m - V_e) + g_i (V_m - V_i) - (I_m - I_c) \right) \quad (20)

dV_p = k \cdot dt \cdot I_c / C_p \quad (21)

V_m = V_m + dV_m \quad (22)

for k = 1: V_p = V_p + dV_p , whereas for k = 0: V_p = V_m + I_m \cdot R_s \quad (23)

To test the performance of our method in the extraction of excitatory and inhibitory conductances, we simulated the response of a cell to a train of synaptic inputs that depress according to the mathematical description of short-term synaptic depression (STD, ), with τ_inact = 0.003 s (inactivation time constant) and τ_rec = 0.5 s (recovery time constant) for excitation, and the same inactivation time constant (0.003 s) but a longer recovery time constant ( τ_rec = 1.3 s ) for inhibition, with the same utilization (0.7) for both. The values of the passive properties of the cell and the strengths of the synaptic conductances in the simulation were chosen to be in a range similar to experimental data [ 8,14,15 ]: a resting input resistance of 150 MΩ, a total capacitance of 0.15 nF and a pipette resistance of 30 MΩ. Simulations were run using a simple Euler method with a time step of 0.1 ms for all point-neuron simulations except for (0.025 ms).

Morphologically realistic simulations

We used NEURON 7.6.7 in Python 3.7.6 to simulate a CA1 pyramidal cell . We loaded this cell directly into NEURON without changes to the neuron model. 50 inhibitory and 50 excitatory synapses were distributed on parts of the apical tree. The synaptic mechanism was a modified version of the Tsodyks-Markram synapse to which we added a synaptic rise time (NEURON mechanism available at https://github.com/danielmk/ENCoI/tree/main/Python/mechs/tmgexp2syn.mod ). The synaptic parameters are detailed in . The event frequency of both synapse types was 10 Hz and events were jittered with a Gaussian distribution of 10 ms SD. All measurements were performed at the soma. To simulate an access resistor in current clamp we added a section with a specified resistance between the current clamp point process and the soma. The access resistance was 10 MΩ. For the stimulation current we summed two sine waves of 210 Hz and 315 Hz. The combined sine waves had a peak-to-peak amplitude of 1 nA. Voltage clamp was performed in separate simulations with a 10 MΩ access resistance, as during current clamp. While isolating the excitatory current, we clamped at the reversal potential of inhibitory synapses (−75 mV). While isolating the inhibitory current, we clamped at the reversal potential of excitatory synapses (0 mV). To convert current to conductance, we divided the current by the clamped voltage minus the synaptic reversal potential. To investigate the relationship between measurement quality and dendritic path distance to the soma, we moved a single excitatory and a single inhibitory synapse to the same dendritic section.
Sections were chosen by iterating through the list of apical dendrites in steps of 5. The synaptic parameters are detailed in . Python simulation results were saved as .mat files using SciPy . Simultaneous conductance analysis and plotting were performed in MATLAB.
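Returning to the point-neuron simulation described at the start of this section, a minimal sketch of the Euler update of Eqs 20-23 is given below (Python/NumPy; parameter values follow the text where stated, the leak reversal and the synaptic drive are placeholders, the injected frequencies are taken from the morphologically realistic simulation, and we write the membrane update with the conventional sign grouping, which we assume is the intent of Eq 20):

```python
import numpy as np

# Passive parameters from the Methods (point neuron): 150 MOhm input resistance,
# 0.15 nF capacitance, 30 MOhm pipette; dt = 0.1 ms. SI units throughout.
g_l, c, R_s = 1.0 / 150e6, 0.15e-9, 30e6
V_l, V_e, V_i = -70e-3, 0.0, -75e-3   # leak reversal of -70 mV is an assumption
dt, t_stop = 1e-4, 1.0
n = int(t_stop / dt)
t = np.arange(n) * dt

# Two summed sinusoids, ~1 nA peak-to-peak combined (frequencies taken from the
# morphologically realistic simulation; the point-neuron values may differ).
w1, w2 = 2 * np.pi * 210, 2 * np.pi * 315
I_m = 0.25e-9 * (np.sin(w1 * t) + np.sin(w2 * t))

# Placeholder synaptic conductances; replace with the depressing-synapse trains.
g_e = np.zeros(n)
g_i = np.zeros(n)

V_m = np.full(n, V_l)
V_p = np.full(n, V_l)
for i in range(1, n):
    Vm = V_m[i - 1]
    I_cell = I_m[i - 1]              # k = 0: no stray capacitance, I_c = 0
    # conventional sign grouping of the membrane equation (assumed intent of Eq 20)
    dVm = dt / c * (I_cell
                    - g_l * (Vm - V_l)
                    - g_e[i - 1] * (Vm - V_e)
                    - g_i[i - 1] * (Vm - V_i))
    V_m[i] = Vm + dVm                # Eq 22
    V_p[i] = V_m[i] + I_m[i] * R_s   # Eq 23 for k = 0
```

Setting k = 1 would add the stray-capacitance branch: I_c is then computed from the voltage drop across the pipette and V_p is integrated separately via Eq 21.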
Getting the Numbers Right in Medicinal Chemistry | e208c5a0-96e1-440b-bb5c-19b848e4fa17 | 11694604 | Pharmacology[mh] | In many sub‐disciplines of Chemistry, the properties of compounds are studied. These properties are often investigated in experiments that furnish quantitative data. Medicinal Chemistry is one of these sub‐disciplines, as Medicinal Chemists often make (or sometimes purchase) compounds and then investigate them for their biological properties. In this regard, Medicinal Chemistry is not completely different from, for instance, Physical Chemistry, with the notable exception that Medicinal Chemists are usually more interested in biological rather than physical properties of chemical matter. In the field of Medicinal Chemistry, quantitative data obtained from biological assays are often represented with parameters such as IC 50 values or percentage remaining activities for target inhibition, or half‐lives for stability in relevant biological media. The way these numbers are reported reflects on the precision of the respective experiment. Medicinal Chemists usually learn at rather early stages in their careers that any work with biological material, even in the controlled setting of an in vitro assay, involves significant experimental uncertainties, leading to rather high standard deviations (SD) of the obtained mean values. Hence, SD values of, for example, 10 %, 20 % or even more of the mean value are not uncommon at all in this discipline, which is perfectly acceptable though. Medicinal Chemists usually aim for improvements of biological properties that are way beyond these percentages when they optimise a hit or lead structure, and we all can therefore be completely at ease with these experimental uncertainties. However, problems arise when such data from biological experiments are reported in a way that is in obvious contradiction to the acceptance of a significant experimental error, that is, when data are provided with unreasonable numbers of significant figures. In this Persepctive, I would like to argue that this is a rather widespread phenomenon in the Medicinal Chemistry literature and that we all should aim to do better. It should be noted that I do not intend to call out specific colleagues or editors, but I would rather like to alert us all (including myself) that we should pay more attention to the way we report quantitiative data in the field of Medicinal Chemistry. In order to make the message of this Perspective as clear as possible, I hereby would like to provide a short reminder of the way quantitative scientific data should ideally be reported. Significant figures are essential in this context, that is, the digits of a number starting with the digit furthest to the left that is not zero, and ending with the digit furthest to the right. For example, the number 1.24 has three significant figures as has the number 0.124 or the number 0.120. The significant figures of an experimentally obtained number should correspond to the precision of the respective experiment: the last digit to the right usually is the one with the experimental uncertainty. Hence, numbers from very precise experiments should be reported with more significant figures than results from less precise experiments with larger errors. In synthetic chemistry, it can be a useful rule to report amounts of employed reagents with three significant figures, if this is justified by the experimental precision. 
The latter is sometimes not the case when volumes of liquids are measured as this is often done with less precision in synthetic laboratories. Thus, an amount of e. g. 5.24 g for a starting material would probably be universally accepted in the synthetic literature, while a number such as 5.2404 g would be universally criticised and should be rounded. The question therefore arises if a similarly useful rule of thumb can be identified for reporting numbers from biological assays in the field of Medicinal Chemistry. In my estimation, this is perfectly possible when one reflects the aforementioned statements regarding experimental uncertainties in such assays. If SD values of more than 10 % of the mean value are acceptable in most biological assays, then the number of significant figures for the mean value should be limited to two . I would like to further clarify this statement: With such significant experimental errors (>10 %), the experimental uncertainty is usually already reflected in the second significant figure. Hence, any further significant figure would be superfluous and without informational value. The SD then has to be adjusted accordingly so that it has the same number of decimal places as the mean value. However, in my estimation, one should always round up the SD in order to avoid reporting values that appear to be more precise than the experiment has actually been. The latter recommendation might be controversial and not universally accepted though. To illustrate these statements, Table provides some made‐up examples of incorrect and correct versions of experimental data. As with any rule of thumb, the proposed guideline to report mean values of data from biological assays with just two significant figures should be applied with some caveats. For instance, some assays might have an intrinsically higher experimental precision than described above, that is, they might consistently furnish SD values significantly below 10 % of the respective mean values. Naturally, such data should be reported with significant figures that reflect the higher experimental precision. It should be noted though that such precise biological assays are the fairly rare exception in most Medicinal Chemistry publications. I became alert to the problem discussed in this Perspective in a slightly unusual way. We teach a research seminar for our undergraduate students in their last semester prior to their final exam. For this research seminar, groups of two students each present a rather recent paper from the Medicinal Chemistry literature that is then further discussed by the whole class. In this scenario, I have had the opportunity to look at papers in great detail that I probably would just quickly go through when browsing the latest literature in our field. After a while, I have noticed that nearly every presented paper, even those from esteemed journals, had some issues with the way biological data are presented: there simply were too many significant figures provided in most of the tables. However, statements on such a delicate topic should not be misled by subjective impressions, but should rather be based on objective observations. In the preparation of this contribution, I have therefore gone through the most recent issues of five selected and esteemed journals in the field of Medicinal Chemistry: the Journal of Medicinal Chemistry , the European Journal of Medicinal Chemistry , ChemMedChem , ACS Medicinal Chemistry Letters , and RSC Medicinal Chemistry . 
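Before turning to what was found in these journal issues, the rounding guideline proposed above (limit the mean to two significant figures and round the SD up to the same number of decimal places) can be illustrated with a minimal sketch; the helper name and the specific rounding choices are illustrative assumptions rather than a prescribed standard:

```python
import math

def format_mean_sd(mean, sd, sig_figs=2):
    """Format a mean with `sig_figs` significant figures and an SD rounded up
    to the same number of decimal places (e.g., 27.345 +/- 4.232 -> 27 +/- 5)."""
    if mean == 0:
        return f"0 ± {sd}"
    decimals = sig_figs - 1 - int(math.floor(math.log10(abs(mean))))
    factor = 10 ** decimals
    mean_r = round(mean * factor) / factor
    sd_r = math.ceil(sd * factor) / factor       # always round the SD up
    if decimals > 0:
        return f"{mean_r:.{decimals}f} ± {sd_r:.{decimals}f}"
    return f"{mean_r:.0f} ± {sd_r:.0f}"

# Example with invented numbers: 0.1243 ± 0.0282 would be reported as "0.12 ± 0.03".
print(format_mean_sd(0.1243, 0.0282))
```

Such a helper is of course no substitute for judgment: assays with consistently smaller errors justify more significant figures, as discussed above.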
For each of these journal issues, I only took original research papers into account, not reviews or other articles. Papers without numerical data from biological assays were neglected, and no Supporting Information files were taken into account. My original idea had been to generate some sort of statistics by grouping the relevant papers into different categories with respect to the representation of numbers from biological assays. However, I soon realised that such a meticulous exercise would not be necessary to get the main point across. I simply looked for mistakes in the style of those depicted in Table , and I found them in the vast majority of the studied research papers. An arbitrary selection of such mistakes is provided in Table , with two examples from each of the investigated journal issues. It is astonishing how similar examples 1-10 (Table ), which can actually be found in the recent Medicinal Chemistry literature, are to the made-up examples of 'incorrect' numbers in Table . It should be noted that the imaginary examples listed in Table had been compiled before the described literature search, so they have not been adjusted by any means to what was found in the cited publications. This observed phenomenon is certainly not limited to a specific type of assay or to a certain journal, but appears to be of almost ubiquitous nature. Bearing in mind that such examples could be found in many papers in all of the five studied journals, one could almost identify an epidemic of superfluous digits in Medicinal Chemistry data. In this context, it is important to make several clarifications: (i) No concerns with the quality of the actual data and their integrity are meant to be implied. The discussed issue is only with the formal representation of experimental data, not with the experiments themselves. (ii) No specific papers are explicitly cited that could serve as particularly bad examples. This article is intended to be a constructive contribution to our scientific community, not some sort of medieval pillory. In particular because the problem appears to be of nearly ubiquitous nature, it would be pointless to call out some selected authors on it. (iii) Sometimes, overly precise numbers are provided for the sake of readability. For instance, if a table has nM activity data for most of its entries, it can make sense to list all entries with nM numbers, even if that means that some of the entries will have three (or even more) significant figures. In my estimation, such practice should not be of concern, as readability is a noble cause. (iv) In some of the papers I have studied, experimental numbers were treated with great care and flawlessly presented. However, if negative examples are not explicitly called out here, it appears to be consistent to not explicitly cite best-practice examples either. Overall, I would simply encourage readers of this Perspective to repeat my exercise and browse recent issues of some Medicinal Chemistry journals. It is very likely that the outcome will be similar, with some examples of good practice and a worrying number of flaws similar to those listed in Table .
The identification of this problem is based on two hypotheses: (i) Medicinal Chemistry is a quantitative science, and therefore, quantitative data should be treated with great care and rigor. (ii) Biological assays conducted in the field of Medicinal Chemistry often come with significant experimental uncertainties (i. e. SD values of more than 10 % of the mean value), which is perfectly acceptable in this discipline. From these hypotheses, a general guideline is derived: For most datasets from biological assays in Medicinal Chemistry, the number of significant figures for the mean value should be limited to two . Browsing the recent Medicinal Chemistry literature, it becomes obvious that it is the exception rather than the rule that biological data are reported in such a consistent way. Far too often, data are provided with significant flaws (see selected examples in Table ). As this is not limited to specific types of assays or to certain journals, an epidemic of superfluous digits in Medicinal Chemistry data can be identified. The obvious argument against these statements might be that they concern a mere technicality rather than the substance of Medicinal Chemistry research. I would like to argue against this. Firstly, a quantitative science should always be rigorous with the way quantitative data are reported. Secondly, one has to wonder if authors who report data in such an unfortunate way are really aware of the inherently limited precision of their experiments. In any case, it would be no problem to do better and there is no obvious reason why we should not aim to do so. It is a bit mysterious to me why the described phenomenon exists in the first place. One explanation might be that some authors believe that reporting numbers with many digits is an indication of precision. However, this is certainly not the case as experimental precision is part of the experiments themselves. An experiment with significant error does not become more precise by throwing many digits at the reader to whom its results are communicated. Another potential explanation would be even simpler: the scientific community of Medicinal Chemists just has not paid sufficient attention to this issue yet. This brings me to another aspect: What can we do about this problem? I would like to propose a significant change in editorial policies in Medicinal Chemistry journals. There should be explicit guidelines for authors on the way quantitative data from biological assays are reported, and editorial offices should check if submissions follow these guidelines. In case of significant violations, submissions should be sent back to their authors before they enter the peer review process. Referees should pay attention to the issue as well and should address any remaining inconsistencies in their reports. In my estimation, all of these measures could be implemented in a relatively straightforward manner. It is important to note that this Perspective only addresses the widespread habit to report data with superfluous digits, but that there are other discussible issues with how data are presented in the Medicinal Chemistry literature. Most notably, there are several scholars who advocate for the use of pIC 50 instead of IC 50 values when activity data are reported (with pIC 50 being the negative decadic logarithm of IC 50 ). This mainly results from the multiplicative nature of experimental errors in biological systems that therefore lead to log‐normal distributions. 
However, this topic (despite such reasonable arguments) is not within the scope of this contribution. Finally, I would like to point out again that this Perspective is not intended to call out anyone or to cause controversy, but to stimulate fruitful discussions on the presentation of quantitative data within our scientific community. Most of us have probably been guilty of not rounding our numbers properly here and there (including myself), but we all can do our best to improve the way we report data from biological assays. It would be very much appreciated if this contribution might help to reach this goal. The authors declare no conflict of interest. |
Educational Experience of Interventional Cardiology Fellows in the United States and Canada | 5a983144-f014-4e78-ad1f-d227d1d012da | 9924361 | Internal Medicine[mh] | The survey questions were prepared through an interactive process among the coauthors. Fifty-nine questions were included in the final version . The survey was administered using Research Electronic Data Capture , and was distributed to IC fellows (postgraduate year 7 only) via e-mail at the end of the postgraduate year 7 training period for 2021 to 2022 (May 2022). Preferred learning sources were rated on a scale ranging from 0 to 10, with 10 representing the highest preference. Categorical variables are presented as percentages and were compared using the chi-square or Fisher exact test as appropriate. Statistical analyses were performed using Stata version 17.0 (StataCorp). The study was approved by the Institutional Review Board.
Of 360 postgraduate year 7 IC fellows in the United States and Canada, 111 (31%) responded to the survey. Most participants were from the United States (95%) and started their fellowships in the summer of 2021 (98%). Most participants (70%) were from university programs , 79% were men, the median age was 31 to 35 years, the median clinical work hours were 61 to 70 per week, and the time spent on research-related activities was <2 hours/week. The median number of first-year IC fellows per program was 2, the median number of attending interventional cardiologists was 8 to 10, and the median number of percutaneous coronary intervention (PCI) hospitals at which the IC fellows rotated was 2. The median number of cases performed as the first operator was 350 to 399 for PCI and 40 to 49 for peripheral cases. The median number of cases for which the IC fellows scrubbed in was 50 to 59 for ST-segment elevation myocardial infarction, 40 to 49 for structural cases, and 20 to 29 for chronic total occlusion (CTO) PCI . We did not find an association among CTO, structural, and peripheral case numbers and program type (university vs nonuniversity) . Most fellows were very comfortable obtaining femoral access (91%) and radial access (97%) and engaging and treating bypass grafts (63%). However, only 32% were very comfortable with engaging coronary arteries in patients with prior transcatheter aortic valve replacement. For femoral access, ultrasound was always used by 65% of fellows, and a micropuncture needle was used by >95%. For radial access, ultrasound was always used by 12%. IC fellows at university programs were more comfortable with engaging and treating bypass grafts ( P = 0.006), performing and interpreting intravascular ultrasound (IVUS) ( P = 0.001), using covered stents ( P = 0.039), and checking radiation dose for the cases for which they scrubbed in ( P = 0.001) . The Perclose (Abbott Vascular) was used in 50% to 60% and the Angio-Seal (Terumo) in 40% to 50% of femoral arterial closures. Most IC fellows were very comfortable using the Perclose (79%) and Angio-Seal (85%). Most (88%) IC fellows were very comfortable performing and interpreting invasive coronary physiology. The proportions of IC fellows very comfortable with performing and interpreting IVUS and optical coherence tomography (OCT) were 62% and 32%, respectively , and 20% did not have access to OCT. Higher PCI volume was associated with higher comfort level with IVUS but not OCT ( P = 0.04 and P = 0.55, respectively, robust Poisson regression), but variations in PCI volume did not explain the changes in IVUS comfort level (pseudo- R 2 = 0.004). Overall, IC fellows were more comfortable with IVUS than OCT ( P = 0.024, chi-square test). In addition, IC fellows who were more comfortable with IVUS were also more comfortable with OCT ( P = 0.024, chi-square test). The median number of atherectomy cases per IC fellow was 30 to 39. The proportion of fellows very comfortable with various calcium modification techniques was as follows: 84% for intravascular lithotripsy, 42% for rotational atherectomy, 32% for laser atherectomy, and 27% for orbital atherectomy. The median number of thrombectomy techniques used per IC fellow was 16 to 20. The proportion of IC fellows very comfortable with various thrombectomy techniques was as follows: 50% for overall management of intracoronary thrombus, 56% for syringe-based manual aspiration, and 39% for continuous mechanical aspiration with the Penumbra CAT RX. 
The median number of distal embolic device use per IC fellow was 5 to 10. The proportion of fellows very comfortable with embolic protection devices (EPDs) was 24% for the SpiderFX (Medtronic) and 11% for the FilterWire EZ (Boston Scientific). The proportions of IC fellows very comfortable with the use of various devices and techniques and the median number of devices used were as follows: 89% for intra-aortic balloon pumps (median 21-29), 69% for Impella (Abiomed) (median 21-29), 8% for venoarterial extracorporeal membrane oxygenation (median 1-4), 14% for covered stents (median 1-4), 4% for fat embolization (median 0), and 3% for coil embolization (median 0). The proportions of IC fellows very comfortable with various bifurcation stenting techniques and the median case number for which the stenting technique was used were as follows: 89% for provisional stenting (median >30), 40% for double-kissing (DK) crush (median 15-20), 32% for T and protrusion (median 11-20), and 30% for culotte (median 1-4) . Most participants (73%) reported looking at the blood pressure waveform and electrocardiogram regularly before and after each coronary injection. More than one-half (56%) reported checking the radiation dose in the cases for which they scrubbed in regularly, but only 13% reported looking at their own radiation exposure regularly. Almost one-quarter (24%) reported being told that their radiation exposure was too high, of whom 81% took measures to reduce their radiation exposure. Measures to reduce radiation self-exposure included decreasing fluoroscopy time, increasing distance from the radiation source, use of more physical barriers, and less involvement in diagnostic cases. Proportions of at least 1 use of radiation protection devices were as follows: 85% for radiation protection goggles, 71% for RADPAD devices, 19% for robotic PCI, 10% for Zero-Gravity (Biotronik), and 7% for the Rampart M1128. IC fellows tracked their procedure types and numbers mainly using personal computers (50%) or web-based procedure logs (40%). IC fellowship was considered very stressful by 22%, somewhat stressful by 62%, and not stressful by 16%. Most participants (84%) reported having enough psychological support. Those with insufficient psychological support named access to affordable mental health, more attending interventional cardiologists, and less call burden to improve their mental health . The most preferred learning sources were webinars (7.6/10), presentations at meetings (7.5/10), YouTube (7.5/10), journal articles (6.4/10), books (5.7/10), and Twitter (4.0/10) . The plans of fellows for the following years were private practice (53%), structural fellowship (16%), academic IC (15%), complex and high-risk PCI fellowship (8%), peripheral careers (7%), and noninterventional jobs (1%).
Our study provides a contemporary snapshot of the educational experience of first-year IC fellows in the United States and Canada during the 2021-2022 fellowship year. The main findings of our study are as follows: 1) 13% reported performing <250 PCIs ; 2) 21% are women; 3) 64% and 32% felt very comfortable using IVUS and OCT, respectively ; 4) fewer than one-half felt very comfortable using various atherectomy techniques; 5) fewer than one-quarter felt very comfortable using EPDs; 6) approximately one-quarter were told that their radiation exposure was too high, but only 13% regularly checked their own radiation exposure; and 7) 22% consider IC fellowship very stressful and 62% somewhat stressful, and 16% reported lack of adequate psychological support. The Accreditation Council for Graduate Medical Education (ACGME) program requirements for IC fellowship state that “each fellow should perform a minimum of 250 coronary interventions.” In our survey, 13% reported having performed <250 PCIs . The training period of the participants was July 2021 to July 2022, when cardiac catheterization laboratories were affected by the COVID-19 pandemic and the shortage of iodinated contrast media, which may have diminished the PCI volume because of cancellations of elective procedures. , Similar to our findings, in a 2020 study investigating the impact of the pandemic on IC fellowship in the New York metropolitan area, 21% of program directors expected that their fellows would have <250 PCIs, and another 14% expected that the average PCI number would be <300. , According to the Association of American Medical Colleges report “Active Physicians by Sex and Specialty, 2019,” 8% of practicing interventional cardiologists are women. The Association of American Medical Colleges report “ACGME Residents and Fellows by Sex and Specialty, 2020” indicates that in 2019, 13% of IC fellows were women. In our survey, 21% of participating IC fellows were women, which might indicate increasing recruitment of women in IC. The use of radial access is recommended in the American College of Cardiology/American Heart Association and European Society of Cardiology guidelines , ; however, femoral access is often required (eg, need for mechanical circulatory support or in CTO PCI). State-of-the-art femoral access requires the use of fluoroscopy, ultrasound, a micropuncture needle, femoral angiography, and vascular closure devices, which play a role in the optimization of patient outcomes. , , , In our survey, however, only 65% reported regular use of ultrasound for femoral access, and 19% reported using ultrasound <50% of the time for femoral access. In contrast, in a 2016 survey of interventional cardiologists, despite the availability of ultrasound and technical expertise, only 13% reported routine use of ultrasound for femoral access, indicating that there is room for improvement in ultrasound use. Intravascular imaging is associated with improved PCI outcomes. , , In a survey of IC fellows performed in 2018 and 2019, self-reported sufficient or expert training was reported by 95% for invasive physiology, 82% for IVUS, and 46% for OCT. In our study, most participants (98%) reported feeling somewhat or very comfortable in performing and interpreting invasive coronary physiology. Moreover, IVUS was performed and interpreted somewhat or very comfortably by 96% of the participants, and 57% of participants were somewhat or very comfortable in performing and interpreting OCT . 
Achieving excellent stent expansion is essential in the optimization of PCI outcomes. The aging patient population, with an increasing prevalence of comorbidities, may contribute to increasing lesion complexity (eg, heavy calcification) in the cardiac catheterization laboratory, which could hinder proper stent placement. Atherectomy techniques may be increasingly needed to modify calcified lesions to facilitate balloon angioplasty and stent implantation. In our survey, most fellows (79%) who performed ≥50 atherectomies felt very comfortable performing atherectomy, but only 20% of fellows who performed 20 to 29 atherectomies felt very comfortable. Although most participants (67%) who performed the DK crush technique ≥30 times felt very comfortable performing DK crush, only 33% of those who performed DK crush 5 to 10 times felt very comfortable. Overall, similar trends were observed for other techniques and devices (including thrombectomy techniques, mechanical circulatory support device use, and covered stents), where fellows with more experience with a given device or technique were more comfortable using or performing it. The use of EPDs in saphenous vein graft (SVG) PCI became the standard of care largely on the basis of the SAFER (Saphenous Vein Graft Angioplasty Free of Emboli Randomized) trial, in which the PercuSurge GuardWire reduced the incidence of 30-day major adverse cardiovascular events by 42%, mainly because of a reduction in periprocedural myocardial infarction, compared with no EPD. However, observational studies conducted later had conflicting results, , and the 2018 European Society of Cardiology/European Association for Cardio-Thoracic Surgery guidelines and the 2021 American College of Cardiology/American Heart Association/Society for Cardiovascular Angiography and Interventions guidelines on coronary artery revascularization have a Class 2a recommendation, a downgrade from the previous Class 1 recommendation. , An analysis of the National Cardiovascular Data Registry CathPCI Registry demonstrated that EPDs were used in approximately 20% of SVG PCIs, with 5.6% of hospitals using EPDs in ≥50% of SVG PCI cases and about one-third of hospitals never using EPDs. Similarly, in our survey, the median number of EPDs deployed by IC fellows was 5 to 10, with only 24% and 11% of the fellows being very comfortable with the SpiderFX and FilterWire EZ, respectively, indicating infrequent use of EPDs. Infrequent use of EPDs is negatively affecting training in the use of these devices, which will likely lead to further reduction in EPD use. Although improvements in catheterization laboratory equipment have led to a decrease in occupational exposure to ionizing radiation, radiation exposure may increase the risk for brain tumors and cataracts. Moreover, the adverse effects of radiation exposure on the thyroid gland, , reproductive system, and cardiovascular system are well documented. Only 56% of IC fellows reported checking radiation dose regularly, and only 13% reported looking at their own radiation exposure. Moreover, 24% of the IC fellows were told that their radiation exposure was too high. High variation in radiation safety practices was also seen in prior surveys, and radiation is rarely discussed in live case presentations. 
Therefore, training in catheterization laboratory radiation mitigation strategies (eg, using less extreme angulation, reducing the distance between radiation source and detector, setting a lower frame rate, using real-time radiation monitors, and working with manufacturers and radiation physicists to learn how radiation exposure occurs) is needed. Our study shows that IC training can take a significant toll on fellows, with 84% of participants describing IC training as somewhat (62%) or very (22%) stressful. Of the participants, 16% reported not having enough psychological support and suggested more attending physicians and mentorship (35%), improved access to affordable mental health care (35%), and less nonclinical work along with decreased work hours (25%) to alleviate stress. However, we did not find a relationship between increased stress levels and not having enough psychological support (P = 0.533). Our findings are comparable with those reported in the “Medscape Cardiologist Lifestyle, Happiness & Burnout Report 2021,” in which approximately one-half of the participating cardiologists were not happy outside work and 75% reported some anxiety about their future. The cardiologists also cited similar reasons for burnout, including “too many bureaucratic tasks” (52%), lack of respect from administrators and employers, colleagues, or staff members (48%), lack of control and autonomy (34%), and government regulations (10%). Our survey suggests that the following improvements could enhance interventional fellows’ education and experience. First, additional training is desirable in intravascular imaging, devices and techniques for treating complications, atherectomy, and radiation awareness. As many of these devices are infrequently used in everyday practice, training could take many forms, such as bench deployment of these devices, simulation, and practical and easily available instructions on how to perform these procedures. Second, access to psychological support would be helpful. Fellows suggested that programs should ensure that they have the career and job-search guidance they need and have “protected time” (eg, Sundays off) to improve well-being. In our study, webinars (7.6/10) and presentations at meetings (7.5/10) were the preferred learning methods. Similarly, a recent survey of interventional cardiologists reported presentations at meetings (66%) and webinars (48%) as the most useful learning sources. This highlights that fellows prefer more “digestible” material and demonstrates opportunities to expand video- and web-based training material, but without compromising accuracy and quality. Study limitations Despite the high response rate for a survey (31%), IC fellows who participated in the survey may be more interested in new developments, which might result in selection bias. Moreover, the case numbers are self-reported and inherently subject to recall bias.
Our study provides insights into the educational experience of IC fellows in the United States and Canada and the comfort levels of IC fellows with various IC core competencies. We show that even though most IC fellows feel somewhat comfortable using various IC techniques, 13% reported having performed fewer than the ACGME requirement of at least 250 PCIs per year in 2021 and 2022, when the impact of the COVID-19 pandemic and the iodinated contrast shortage was appreciable. Improvements are needed in the use of state-of-the-art femoral access, intravascular imaging, comfort with atherectomy techniques and devices, EPDs, complication management, radiation awareness and mitigation, and psychological support. Perspectives WHAT IS KNOWN: The COVID-19 pandemic and iodinated contrast shortage may have affected IC fellowship training. WHAT IS NEW: We provide a snapshot of the current educational experience of IC fellows and demonstrate that 13% performed <250 PCIs/y; a large proportion reported lack of comfort with several devices and techniques, including bifurcation PCI, state-of-the-art femoral access, intravascular imaging, atherectomy, covered stents, and EPDs; fellows reported a lack of awareness of their radiation exposure; and 16% reported insufficient psychological support. WHAT IS NEXT: Improved access to psychological support and additional training in various forms, such as bench deployment of infrequently used devices, simulation, and practical, easily available instructions on how to perform these procedures, are needed.
The authors are grateful for the philanthropic support of their generous anonymous donors and the philanthropic support of Drs Mary Ann and Donald A. Sens, Mrs Diane and Dr Cline Hickok, Mrs Wilma and Mr Dale Johnson, the Mrs Charlotte and Mr Jerry Golinvaux Family Fund, the Roehl Family Foundation, and the Joseph Durda Foundation. The generous gifts of these donors to the Minneapolis Heart Institute Foundation’s Science Center for Coronary Artery Disease helped support this research project. Dr Azzalini has received honoraria from Teleflex, Abiomed, Asahi Intecc, Philips, GE Healthcare, Abbott Vascular, and Cardiovascular Systems. Dr Sandoval previously served on the advisory boards for Roche Diagnostics and Abbott Diagnostics without personal compensation; and has been a speaker without personal financial compensation for Abbott Diagnostics. Dr Brilakis has received consulting and speaker honoraria from Abbott Vascular, the American Heart Association (associate editor, Circulation ), Amgen, Asahi Intecc, Biotronik, Boston Scientific, the Cardiovascular Innovations Foundation (Board of Directors), ControlRad, Cardiovascular Systems, Elsevier, GE Healthcare, Interventional Medical Device Solutions, InfraRedx, Medicure, Medtronic, Opsens, Siemens, and Teleflex; is an owner of Hippocrates; and is a shareholder in MHI Ventures and Cleerly Health. All other authors have reported that they have no relationships relevant to the contents of this paper to disclose.
Immediate single‐tooth implant placement in bony defect sites: A 10‐year randomized controlled trial | ab3395e4-ef20-43f1-ab5c-89d3199fba2b | 11866729 | Dentistry[mh] | INTRODUCTION Immediate implant placement in the esthetic zone is a favorable treatment option if there is an intact buccal bone wall. , , , , Some authors state that a compromised alveolar socket might affect the outcome of immediate implant placement. , , Therefore these authors recommend delayed implant placement combined with bone grafting and/or soft tissue grafting when a buccal plate defect is found at implant placement. , , However, there is some evidence that favorable treatment outcomes are also possible on placing implants directly in compromised postextraction sockets, although the included studies were not limited to the maxillary esthetic region. There is very little literature on immediate implant placement in extraction sockets with a buccal plate defect in the maxillary esthetic region. Sarnachiaro et al., Liu et al., Pohl et al., Mizuno et al., and Qian et al. performed prospective studies with a follow‐up of up to 1 year. Noelken et al. studied immediate implant placement in patients with a buccal bone defect, with a median follow‐up of 22 months. Only Kamperos et al., Slagter et al., and Zhao et al. reported 5‐year results for immediate implant placement in buccal defect sites. No long‐term studies with a follow‐up of at least 10 years have been published yet on this specific dental implant treatment. Although it might not be essential to have an intact buccal plate at immediate implant placement, stable buccal bone thickness (BBT) and midbuccal mucosa levels are important long‐term parameters in the esthetic region. The BBT and midbuccal mucosa level should, preferably, be part of the evaluation, starting with preoperative measurements at baseline. Only Liu et al., Slagter et al., Mizuno et al., Qian et al., and Zhao et al. assessed BBT and midbuccal mucosa levels in patients with buccal plate defects. It is important that studies performing immediate implant placement in compromised sites in the maxillary esthetic region report full‐scale evaluation parameters, including information on the buccal mucosa, bone level, and bone thickness. Medium‐term and long‐term follow‐ups are particularly needed (statements from the XV European Workshop in Periodontology). Since full‐scale long‐term evaluations of implants inserted in postextraction sockets with buccal plate defects in the maxillary esthetic region are lacking, we undertook this 10‐year randomized controlled trial to assess changes in bone level, mucosa level, BBT, esthetic ratings by professionals, and patient‐reported satisfaction after immediate implant placement in postextraction sockets with buccal bony defects and delayed implant placement after ridge preservation in the esthetic region. MATERIALS AND METHODS 2.1 Study design Details of the study design, inclusion criteria, exclusion criteria, sample size calculation, patient characteristics, and 1‐ and 5‐year results were described by Slagter et al. in 2016 and Slagter et al. in 2021. , The initial study was set up as a 1‐year randomized controlled trial. The Medical Ethical Committee (METc) of the University Medical Center Groningen (UMCG), the Netherlands, gave their consent for the 1‐year randomized controlled trial (NL32240.042.10). 
As the 10‐year follow‐up visit was part of a regular control appointment and did not serve to collect additional data, except for a questionnaire to be filled in by the participants, the METc concluded that it was not new clinical research involving test subjects, as referred to in the Medical Research Involving Human Subjects Act (METc UMCG RR number 202100181). The 1‐ and 10‐year studies were registered in the ISRCTN (International Standard Registered Clinical/Social Study Number) registry and the Dutch Trial Register, with the respective numbers ISRCTN57251089 and NTR_NL‐9340. All the patients gave written informed consent before enrollment and verbally approved the use of the research data obtained during the follow‐up. The study was conducted in accordance with the 2013 revised Helsinki Declaration of 1975. Preoperatively, cone beam computed tomography (CBCT) was used to assess whether there was sufficient bone on the palatal side to place an implant since, for primary stability of the implant, sufficient palatal bone is necessary in case of labial dehiscence. The vertical bony defect had to be ≥5 mm at the labial socket wall after removal of the tooth. Forty patients were enrolled and allocated to either an Immediate Group (test group): immediate implant placement and delayed provisionalization or a Delayed Group (control group): delayed implant placement after ridge preservation and delayed provisionalization. 2.2 Surgical procedure The surgical procedure has been described in detail by Slagter et al. In short, both groups’ failing tooth in the maxillary esthetic region was carefully removed and a bone graft was harvested from the tuberosity region with the use of chisels. In all cases, the ailing tooth was removed with a sulcular incision, careful detachment of the periodontal ligament, and use of periotomes. After removal of the tooth, the alveolus was meticulously cleansed, and any alveolar debridement was removed. Preparation of the Immediate Group's alveolus was done at the palatal side guided by a surgical template for ideal positioning. The tuberosity bone graft was shaped with forceps to match the labial bony defect. The bone graft was placed in the extraction socket, with the cortical side facing the periosteum, under the periosteum covering the labial plate defect. A mixture of tuberosity bone and deproteinized bovine bone substitute (Bio‐Oss; Geistlich, Wolhusen, Switzerland) was used to fill the remaining space between the last bur and the tuberosity bone graft. In the Immediate Group, the implant site was prepared on the palatal side of the alveolus without raising a flap using a surgical template for ideal positioning. After this, a tapered dental implant (NobelActive; Nobel Biocare AB, Gothenburg, Sweden) was placed. A soft tissue graft, also from the tuberosity region, was placed to seal the implant site. Three months later, the implant was uncovered and an implant‐level impression was made to manufacture a provisional restoration. The Delayed Group's alveolus was augmented with the same procedure described for the Immediate Group, but without implant placement. Three months after ridge preservation, a pedicled mucoperiosteal flap was raised in the Delayed Group to expose the maxilla, after which the tapered dental implant (NobelActive; Nobel Biocare AB, Gothenburg, Sweden) was placed using a surgical template. Three months after that, the implant was uncovered. The surgical procedures were performed by one experienced oral and maxillofacial surgeon (G.M.R.). 
2.3 Prosthetic procedure In both groups, a screw‐retained provisional restoration was placed on the same day that the implant was uncovered. After 3 months, porcelain‐fused‐to‐zirconia definitive crowns were manufactured for both groups. The restoration was either glass‐ionomer cement‐retained or screw‐retained. The abutment screws were torqued at 32 Ncm. The prosthetic procedures were performed by a single dental laboratory and one experienced prosthodontist (H.J.A.M.). 2.4 Outcome measures The outcome measures have been described in detail by Slagter et al. Change in marginal bone level (MBL) was the primary outcome. The following outcome items were evaluated: 2.4.1 MBL and BBT Standardized digital periapical radiographs were made immediately after implant placement (baseline = T0) and 1 (T1) and 120 (T120) months after definitive restoration placement. Changes in MBL were calculated at T1 and T120 in relation to the level at baseline. BBT measurements were done using 3D image diagnostic and treatment planning software (NobelClinician version 2.1; Nobel Biocare Guided Surgery Center, Mechelen, Belgium). Of each patient, the position of the implant was determined by importing the 1‐month and 10‐year CBCT, in DICOM multifile format, into an image computing program (Maxilim version 2.3; Medicim, Sint‐Niklass, Belgium). In NobelClinician, the exact position of the implant, as determined in Maxilim, was aligned with a planning implant. A slightly different procedure was followed for the pretreatment CBCT in which no implant was present. Both the pretreatment CBCT and the 1‐month CBCT were imported in Maxilim. Both images were aligned by the computing program. Because the exact position of the implant was determined for the 1‐month image, it was now possible to implement this position in the pretreatment DICOM file. In this way, a combined file was constructed in which the tooth was still present, and an implant was imported in the exact position where it was going to be after treatment. In fact, the measured distance on the pretreatment CBCT is the distance of the labial surface of the bony layer (if present) to the future implant. The upper 5‐mm section of the implant, starting at the implant neck toward the apical point, was defined as the area of interest (locations M1, M2, M3, M4, M5). Details of the methods for analyzing BBT can be found in Maes et al. and Slagter et al. , 2.4.2 Survival rate The implant and restoration survival rates were defined as the percentage still in function 10 years after implant placement. 2.4.3 Changes in interproximal and midbuccal peri‐implant mucosa levels Standardized digital photographs were taken before extraction of the failing tooth (Tpre) and after 1 month (T1) and 120 months (T120), following the technique published by Meijndert et al. The interproximal and midbuccal changes were compared with the original gingival level of the failing tooth. 2.4.4 Clinical outcomes Clinical variables assessed at Tpre, T1, and T120 were papilla volume, amount of plaque, amount of bleeding, gingival index, and probing pocket depth. 2.4.5 Esthetic assessment Esthetic outcome (pink esthetic score/white esthetic score [PES/WES]) was assessed from the digital photographs. 2.4.6 Patient satisfaction Overall patient satisfaction was assessed by means of a visual analogue scale (VAS), with the possible scores ranging from 0 (completely dissatisfied) to 100 (completely satisfied). 
2.4.7 Biological and technical complications Biological complications, namely, peri‐implant mucositis and peri‐implantitis, were calculated according to the consensus reached at the 2017 World Workshop of the American Academy of Periodontology and European Federation of Periodontology, where peri‐implant mucositis (radiographic bone loss <2 mm) is bleeding on probing+ (BoP+) and/or suppuration, and peri‐implantitis is BoP+ and/or suppuration in combination with a marginal bone loss ≥2 mm. , , In addition to the kind and number of technical complications, the restoration success rate was also calculated up to the 10‐year follow‐up visit and assessed using the modified criteria of the United States Public Health Service (USPHS). 2.5 Statistical analysis A radiographic change in marginal bone loss of >0.9 mm (SD 1 mm) after 12 months of definitive crown placement was regarded as a relevant difference between study groups. With an expected effect size of 0.9 mm, an α of 0.05, and a power of 0.80, 38 patients were required—19 in each group. The primary analyses were performed per protocol strategy. The distribution of the continuous data was checked visually on histograms and was supplemented by the Shapiro–Wilk test and Q–Q plots. Normally distributed data have been reported here as means with 95% CI and compared between groups by using the independent sample t test. Non‐normally distributed variables have been reported as medians and interquartile ranges (first quartile to third quartile) and compared between groups with the Mann–Whitney U test. In addition to the primary analyses, several sensitivity analyses were performed. First, the intention‐to‐treat strategy was applied to the primary outcome. Here, we considered the last‐observation‐carried‐forward, best–worst‐scenario (+1 and −1SD), and worst–best‐scenario (−1 and +1SD) methods as appropriate. Furthermore, since some of the outcomes consisted of repeated measurements, multivariable linear mixed‐effect models (LMM) were fitted using restricted maximum likelihood estimations to assess the between‐group differences of these repeated measurements (i.e., including the fixed effects of the type of intervention, baseline outcome measurement and follow‐up in months, and the random effect of patients). Effect estimates of each group at specific timepoints, including corresponding p values, were derived by centering the follow‐up variable for each specific timepoint. Both sensitivity analyses showed no substantial differences with the primary analyses (data available on request). Therefore, the primary analyses were shown to be robust and could thus be regarded as the main analyses. The statistical analyses were performed in R version 4.0.5 (R Core team), using the lme4 and lmertest‐packages. In all the analyses, a p value <0.05 was considered as statistically significant. Study design Details of the study design, inclusion criteria, exclusion criteria, sample size calculation, patient characteristics, and 1‐ and 5‐year results were described by Slagter et al. in 2016 and Slagter et al. in 2021. , The initial study was set up as a 1‐year randomized controlled trial. The Medical Ethical Committee (METc) of the University Medical Center Groningen (UMCG), the Netherlands, gave their consent for the 1‐year randomized controlled trial (NL32240.042.10). 
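For readers who want to see how the analysis steps described above could be carried out, the sketch below is written in R, the language the authors report using; it is not the authors' code, and the data frame and column names (d, mbl_change, group, mbl_baseline, months, patient) are hypothetical. It illustrates the a priori sample-size calculation, the 2017 World Workshop case-definition rule, and a linear mixed-effects model of the kind fitted with lme4/lmerTest.

library(lme4)
library(lmerTest)   # adds Satterthwaite-based p values to lmer summaries

# A priori sample size for a 0.9-mm between-group difference (SD 1 mm),
# alpha = 0.05, power = 0.80; n is left unspecified so the function solves
# for the per-group n (the exact figure depends on the approximation used).
power.t.test(delta = 0.9, sd = 1, sig.level = 0.05, power = 0.80,
             type = "two.sample")

# Case-definition rule: peri-implant mucositis = BoP+ and/or suppuration with
# radiographic bone loss < 2 mm; peri-implantitis = BoP+ and/or suppuration
# combined with marginal bone loss >= 2 mm.
classify_implant <- function(bop_or_supp, bone_loss_mm) {
  if (bop_or_supp && bone_loss_mm >= 2) "peri-implantitis"
  else if (bop_or_supp)                 "peri-implant mucositis"
  else                                  "healthy"
}

# Mixed model for the repeated MBL measurements: fixed effects of treatment
# group, baseline measurement, and follow-up in months; random intercept per
# patient; restricted maximum likelihood (REML) estimation.
fit <- lmer(mbl_change ~ group + mbl_baseline + months + (1 | patient),
            data = d, REML = TRUE)
summary(fit)

A random intercept per patient is the minimal structure for repeated measurements on the same implant; with only a few timepoints per patient, a random slope for time is usually not estimable.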
Biological and technical complications Biological complications, namely, peri‐implant mucositis and peri‐implantitis, were calculated according to the consensus reached at the 2017 World Workshop of the American Academy of Periodontology and European Federation of Periodontology, where peri‐implant mucositis (radiographic bone loss <2 mm) is bleeding on probing+ (BoP+) and/or suppuration, and peri‐implantitis is BoP+ and/or suppuration in combination with a marginal bone loss ≥2 mm. , , In addition to the kind and number of technical complications, the restoration success rate was also calculated up to the 10‐year follow‐up visit and assessed using the modified criteria of the United States Public Health Service (USPHS). Statistical analysis A radiographic change in marginal bone loss of >0.9 mm (SD 1 mm) after 12 months of definitive crown placement was regarded as a relevant difference between study groups. With an expected effect size of 0.9 mm, an α of 0.05, and a power of 0.80, 38 patients were required—19 in each group. The primary analyses were performed per protocol strategy. The distribution of the continuous data was checked visually on histograms and was supplemented by the Shapiro–Wilk test and Q–Q plots. Normally distributed data have been reported here as means with 95% CI and compared between groups by using the independent sample t test. Non‐normally distributed variables have been reported as medians and interquartile ranges (first quartile to third quartile) and compared between groups with the Mann–Whitney U test. In addition to the primary analyses, several sensitivity analyses were performed. First, the intention‐to‐treat strategy was applied to the primary outcome. Here, we considered the last‐observation‐carried‐forward, best–worst‐scenario (+1 and −1SD), and worst–best‐scenario (−1 and +1SD) methods as appropriate. Furthermore, since some of the outcomes consisted of repeated measurements, multivariable linear mixed‐effect models (LMM) were fitted using restricted maximum likelihood estimations to assess the between‐group differences of these repeated measurements (i.e., including the fixed effects of the type of intervention, baseline outcome measurement and follow‐up in months, and the random effect of patients). Effect estimates of each group at specific timepoints, including corresponding p values, were derived by centering the follow‐up variable for each specific timepoint. Both sensitivity analyses showed no substantial differences with the primary analyses (data available on request). Therefore, the primary analyses were shown to be robust and could thus be regarded as the main analyses. The statistical analyses were performed in R version 4.0.5 (R Core team), using the lme4 and lmertest‐packages. In all the analyses, a p value <0.05 was considered as statistically significant. RESULTS 3.1 Patients Twenty patients were included in both the Immediate Group (mean age 44 ± 14 years) and the Delayed Group (mean age 49 ± 16 years). All the patients were treated accordingly. At the 10‐year evaluation, 15 Immediate‐Group patients (2 patients had died, 2 patients had moved, and 1 patient had changed their upper dentition into an implant‐supported overdenture) and 15 Delayed‐Group patients (4 patients had died and 1 patient had lost the implant) were available (Figure ). 3.2 Changes in MBL and BBT Table shows the mean MBL changes at the approximal sites separately and of the approximal sites combined (mean change at the mesial and distal side). 
The largest MBL change occurred in the period from implant placement until T1 in both groups. After 10 years with the definitive restoration, only minor changes were observed in both groups, without significant differences between the groups (Immediate Group [−0.71 mm; 95% CI, −1.04 to −0.38] vs. Delayed Group [−0.36 mm; 95% CI, −0.58 to −0.14], p = 0.063). BBT (in medians and interquartile ranges) for the M0 to M5 levels are depicted in Table . The preoperative CBCT scan analyses revealed no significant differences between the groups at all six positions. At the 10‐year evaluation, there were no significant differences in BBT between the groups at all six positions. 3.3 Survival rate No implants were lost in the Immediate Group and one implant was lost in the Delayed Group (after 6 years in function), resulting in an implant survival rate of 100% and 93.8%, respectively. One restoration was lost in the Immediate Group after 8 years in function and one restoration was lost in the Delayed Group because the implant had been lost, resulting in a restoration survival rate of 93.3% and 93.8%, respectively. 3.4 Changes in interproximal and midbuccal peri‐implant mucosa levels Changes in soft tissue levels from preoperative to 10 years after placing the definitive restorations are shown in Table . The 10‐year midbuccal mucosa level changes were −0.24 mm (95% CI, −0.67 to 0.19) and −0.19 mm (95% CI, −0.49 to 0.11) in the Immediate Group and Delayed Group, respectively. Hence the difference was not significant ( p = 0.843). 3.5 Clinical outcomes In both groups, low plaque and bleeding indexes and a healthy peri‐implant mucosa were observed, with no significant differences between the groups (Table ). Also, the pocket probing depths were stable throughout the evaluation period, without significant group differences (Table ). 3.6 Esthetic assessment Both groups’ PES/WES were acceptable throughout the follow‐up (Table ). The total esthetic outcome was 15.0 (95% CI, 13.87–16.13) in the Immediate Group and 14.07 (95% CI, 12.95–15.19) in the Delayed Group ( p = 0.218). 3.7 Patient satisfaction Overall patient satisfaction (Table ) was high and not statistically different between the groups throughout the follow‐up ( p = 0.556). 3.8 Biological and technical complications The incidence of peri‐implant mucositis was 20.0% in both the Immediate Group and the Delayed Group, hence the difference between the groups was not significant ( p > 0.999). None of the patients in either group developed peri‐implantitis. In the Immediate Group, a new restoration was made for one patient due to fracture and one patient experienced decementation of the restoration from the abutment (this crown could be cemented again). In the Delayed Group, a new restoration had to be made for one patient because of inserting a new implant and porcelain chipping was seen in two patients (this complication could be solved by polishing, without the need for a new restoration). The restoration success, assessed according to the modified USPHS criteria, is shown in Table . The calculated restoration success rate was 86.6% in the Immediate Group and 93.3% in the Delayed Group. Patients Twenty patients were included in both the Immediate Group (mean age 44 ± 14 years) and the Delayed Group (mean age 49 ± 16 years). All the patients were treated accordingly. 
At the 10‐year evaluation, 15 Immediate‐Group patients (2 patients had died, 2 patients had moved, and 1 patient had changed their upper dentition into an implant‐supported overdenture) and 15 Delayed‐Group patients (4 patients had died and 1 patient had lost the implant) were available (Figure ). Changes in MBL and BBT Table shows the mean MBL changes at the approximal sites separately and of the approximal sites combined (mean change at the mesial and distal side). The largest MBL change occurred in the period from implant placement until T1 in both groups. After 10 years with the definitive restoration, only minor changes were observed in both groups, without significant differences between the groups (Immediate Group [−0.71 mm; 95% CI, −1.04 to −0.38] vs. Delayed Group [−0.36 mm; 95% CI, −0.58 to −0.14], p = 0.063). BBT (in medians and interquartile ranges) for the M0 to M5 levels are depicted in Table . The preoperative CBCT scan analyses revealed no significant differences between the groups at all six positions. At the 10‐year evaluation, there were no significant differences in BBT between the groups at all six positions. Survival rate No implants were lost in the Immediate Group and one implant was lost in the Delayed Group (after 6 years in function), resulting in an implant survival rate of 100% and 93.8%, respectively. One restoration was lost in the Immediate Group after 8 years in function and one restoration was lost in the Delayed Group because the implant had been lost, resulting in a restoration survival rate of 93.3% and 93.8%, respectively. Changes in interproximal and midbuccal peri‐implant mucosa levels Changes in soft tissue levels from preoperative to 10 years after placing the definitive restorations are shown in Table . The 10‐year midbuccal mucosa level changes were −0.24 mm (95% CI, −0.67 to 0.19) and −0.19 mm (95% CI, −0.49 to 0.11) in the Immediate Group and Delayed Group, respectively. Hence the difference was not significant ( p = 0.843). Clinical outcomes In both groups, low plaque and bleeding indexes and a healthy peri‐implant mucosa were observed, with no significant differences between the groups (Table ). Also, the pocket probing depths were stable throughout the evaluation period, without significant group differences (Table ). Esthetic assessment Both groups’ PES/WES were acceptable throughout the follow‐up (Table ). The total esthetic outcome was 15.0 (95% CI, 13.87–16.13) in the Immediate Group and 14.07 (95% CI, 12.95–15.19) in the Delayed Group ( p = 0.218). Patient satisfaction Overall patient satisfaction (Table ) was high and not statistically different between the groups throughout the follow‐up ( p = 0.556). Biological and technical complications The incidence of peri‐implant mucositis was 20.0% in both the Immediate Group and the Delayed Group, hence the difference between the groups was not significant ( p > 0.999). None of the patients in either group developed peri‐implantitis. In the Immediate Group, a new restoration was made for one patient due to fracture and one patient experienced decementation of the restoration from the abutment (this crown could be cemented again). In the Delayed Group, a new restoration had to be made for one patient because of inserting a new implant and porcelain chipping was seen in two patients (this complication could be solved by polishing, without the need for a new restoration). The restoration success, assessed according to the modified USPHS criteria, is shown in Table . 
The calculated restoration success rate was 86.6% in the Immediate Group and 93.3% in the Delayed Group. DISCUSSION Both immediate implant placement, in combination with a bone augmentation procedure, and delayed implant placement after ridge preservation in postextraction sockets with buccal bony defects ≥5 mm in the esthetic zone were accompanied by minor peri‐implant bone loss, good peri‐implant parameters and favorable patient satisfaction at the 10‐year evaluation, without significant differences between both procedures. As far as we know, prospective studies reporting full‐scale outcomes with an evaluation period of at least 10 years after immediate dental implant placement in anterior maxillary sites with labial bony defects have not been published yet. Kamperos et al., Slagter et al., and Zhao et al. analyzed immediate implant placement in buccal defect sites over 5 years. However, it must be mentioned that Slagter et al. reported results from the same study group as the present study. Therefore, it would be best to compare the results of our test group with the retrospective studies by Kamperos et al. and Zhao et al. that dealt with immediate implant placement and esthetic and radiographic outcomes at 5 years, and the results of the control group with the 10‐year retrospective study by Iorio‐Siciliano et al., which examined delayed implant placement after alveolar ridge preservation. The Kamperos et al. study only evaluated PES values (possible total score 0–14) and reported a 9.5 PES and a 9.6 PES for the immediate‐ and delayed implant placement groups, respectively. In the present study, the PES values (possible total score 0–10) were 7.4 and 6.9, respectively. Taking the different total possible scores into account, it can be concluded that the PES results are in line with the other study. The 5‐year Zhao et al. study only evaluated radiographic outcomes and PES values from an immediate implant placement group. They reported a mean peri‐implant bone loss of 0.71 and 0.73 mm at the mesial and distal side of the implant, respectively. Again, these figures are in line with our study's results. Zhao et al. also assessed BBT and reported a mean BBT of 2.86 mm at the end of the 5‐year evaluation period, whereas the present study's median 10‐year BBT varied from 1.24 to 1.63 mm. This difference in BBT can be explained by the fact that Zhao et al. augmented the defect more extensively, resulting in an initially higher BBT. Comparatively, both studies’ change in BBT was limited. Regarding PES values, Zhao et al. results’ were higher, which might be due to their more extensive augmentation procedure. Iorio‐Siciliano et al. performed a 10‐year retrospective study of dental implants after alveolar ridge preservation. A provisional restoration was connected 3–6 months after implant placement; the definitive restorations were placed 3 months later. The implant survival rate was high, thus comparable to the present study. A mean peri‐implant bone loss of 1.1 and 1.0 mm at the mesial and distal side of the implant, respectively, was reported by the other authors; it was even more limited in our study. Thus, both studies show limited bone loss after 10 years. It is striking that not only are there no long‐term studies available comparing implant treatment options for extraction sockets with a buccal bone defect but also that there is a scarcity of medium‐ to long‐term results of a single treatment option. 
Moreover, full‐scale evaluations, including information on buccal mucosa and buccal bone level and thickness, are missing. This also counts for patient‐reported outcomes and long‐term biological and technical complications, even though this was recommended by the XV European Workshop in Periodontology. Midfacial soft tissue level (with underlying buccal bone presence), papilla volume/approximal soft tissue level (with sufficient mesial/distal bone level), and PES/WES are important parameters for determining esthetic treatment success. The midfacial soft tissue level appeared to be very stable in both groups throughout the 10‐year evaluation period. On analyzing the presence of underlying bone at the buccal side of the implant, one can see that both groups’ median BBT was more than 1 mm at the neck of the implant at the 10‐year evaluation timepoint. There was no significant difference between both groups in terms of these outcome parameters. Apparently, immediate implant placement in case of a buccal bony defect does not compromise the esthetic result at the midfacial implant side. Also, papilla volume/approximal soft tissue level and mesial/distal bone appeared to be very stable in both groups throughout the 10‐year evaluation period, without a significant difference between the groups. Both treatment options led to sufficient PES/WES and satisfied patients at both the start of the evaluation (1 month after placement of the definitive restoration) and at the 10‐year evaluation timepoint. Buser et al. recommended a fully intact buccal bone wall with a thickness of at least 1 mm when considering immediate implant placement. Jung et al. recommended delayed implant placement combined with a bone grafting and/or soft tissue grafting approach when a buccal bone defect is noted at implant placement. Possible risk factors for immediate placement in less favorable cases would be orofacial flattening of the soft tissue profile and recession of the facial mucosa. However, our results do not support these statements because of the favorable 10‐year outcomes after immediate implant placement combined with bone augmentation. The biological and technical complications were limited in both groups. Derks and Tomasi published a systematic review in which a prevalence of 43% was mentioned for peri‐implant mucositis and 22% for peri‐implantitis. Fu and Wang and Roccuzzo et al. recently suggested that these high biological disease figures were mainly caused by wrong planning and surgical and prosthetic errors. Our study's biological disease values are much lower. The reason could be that only single‐tooth restorations were included in both study groups, which were carefully planned and could be easily cleaned, as apparent from the clinical outcome parameters. Also, the technical complications were limited in the present study, leading to a high restoration success rate calculated with the modified USPHS criteria. The Donker et al. 10‐year study had a similar high restoration success rate; they used the same criteria as well as the same implant system and restoration design. Both procedures tested in the present study resulted in the same good bone and soft tissue outcomes, and the professionals and patients were equally satisfied. Such similar results mean that professionals can discuss the procedure with the patient and apply the individual's preference. Nevertheless, it must be mentioned that the procedure applied to the Immediate Group requires 3 months less treatment time than the Delayed Group. 
Some limitations of the current study need to be mentioned. First, the initial group size calculation revealed that 19 patients would be necessary in each group. At the 10‐year evaluation, 15 patients could be analyzed from each group. The dropout rate (i.e., 5 patients in each treatment group) results in a higher probability of false‐negative findings (i.e., Type II error). A post hoc power calculation resulted in an achieved power of 72%. Therefore, the lack of statistical difference in the primary outcome ( p = 0.063) might be due to a false‐negative finding resulting from the lower power of the current analyses compared with the baseline sample size calculation. However, the power of the current analysis after a 10‐year follow‐up of a randomized controlled trial is still considered high. In addition, the reasons for loss to follow‐up were mainly the death of subjects, which is very unlikely to be related to the intervention tested in the current manuscript. Furthermore, the dropout rates of both treatment arms are similar, which lowers the probability of bias due to loss to follow‐up. Thus, although loss to follow‐up is a common limitation in studies with a long‐term follow‐up, our study's results should be interpreted with some caution. Next, the study was carried out in a university setting. This means that highly experienced professionals treated the patients. Also, the participant selection process was strict. Hence, the results of our study may differ from those achieved in general practice. CONCLUSIONS Despite the limitations, it can be concluded from this 10‐year evaluation that both immediate implant placement, in combination with a bone augmentation procedure, and implant placement 3 months after alveolar ridge preservation in postextraction sockets with buccal bony defects ≥5 mm in the esthetic region result in very favorable objective and subjective outcomes. All the authors contributed substantially to the conception, design, data interpretation, and critical revision of the study and the manuscript, and approved the final version for publication. Henny J. A. Meijer and Kirsten W. Slagter were involved in collecting the data and drafting the manuscript. Henny J. A. Meijer and Barzi Gareb were involved in the data analysis. The study was supported by an unrestricted grant from Nobel Biocare Services AG, Gothenburg, Sweden (by means of implant materials; research grant 2012‐1135). The authors report no conflicts of interest. The authors received no specific funding for this work.
Referrals to Peer Support for Families in Pediatric Subspecialty Practices: A Qualitative Study | 165ddedf-179f-470e-bd34-bcd2f7edba43 | 11821679 | Pediatrics[mh] | Nearly one in five children in the United States has a special health care need (Maternal & Child Health Bureau, ), with diagnoses encompassing physical, developmental, behavioral, and emotional conditions (Maternal & Child Health Bureau, ). Managing special health care needs and navigating the system of care can be incredibly stressful for families and can lead them to feel socially isolated (Baker & Claridge, ). Research has found that connecting caregivers of children with special health care needs (CSHCN) with others who have similar experiences helps them ask questions about care and understand what to expect about their child’s condition, develop advocacy skills, reduce their sense of isolation, and give them hope for the future (Chakraborti et al., ; Hall et al., ; Hughes, ). These connections are referred to variously as caregiver peer support, parent-to-parent support, family-to-family support, or peer support (PS). Pediatric subspecialists and their practice staff often have ongoing relationships with families of CSHCN with some of the most intensive needs. Consequently, they are in a strong position to identify families who could benefit from referrals to PS (Schor & Fine, ). Referrals to PS can be formal or informal and include referring caregivers to parent support groups, virtual parent-to-parent resources, or individual peer mentors who have experienced similar situations and, ideally, have received some training for that role (Bray et al., ; Tully et al., ). These resources can be inside or outside the medical care setting (Chakraborti et al., ). Although families value PS, there is little known about how referrals to these resources are made (Schor & Fine, ). This study aims to examine referral processes and how subspecialists help people access PS. It follows up on findings from a 2022 survey of pediatric subspecialists in California, which aimed to understand the extent to which the subspecialists provided PS referrals to caregivers of CSHCN (Schor et al., ). Many subspecialists viewed PS referrals favorably, but they were not always familiar with available PS resources. The extent to which they or their practices provided referrals was affected by their knowledge of resources, the time available, staffing, and institutional support. This follow-up study was designed to build on those findings and better understand the subspecialty practices’ processes, barriers, and facilitators regarding PS referrals. This research aims to inform future efforts to improve referrals and access to PS. Overview. The researchers conducted semistructured qualitative interviews of staff at pediatric subspecialty practices across the state of California from August to November 2023. The methods used to conduct this research align with the Consolidated Criteria for Reporting Qualitative Research (COREQ) checklist. Interview Protocol. The study team developed an interview protocol (Appendix ) that contained branching logic based on a respondent’s answer to certain questions, allowing interviewers to tailor conversations depending on respondents’ specialty, role, and involvement with PS referrals. Institutional Review Board (IRB) Review. The team conducted the research in accordance with prevailing ethical principles; the Health Media Labs IRB approved this study before data collection began. Respondent Recruitment. 
The team primarily recruited from people identified by pediatric subspecialists who responded to the 2022 survey. In that survey, respondents volunteered the name and contact information of someone the researchers could contact in the future for more information about PS referrals in their practices. The aim was to interview 20 people from pediatric subspecialty practices in California based on the numbers needed to reach thematic saturation and to achieve diversity of specialty and practice institution. The team began with a purposive sampling approach to achieve perspectives from different specialties and institutions and contacted 72 people from the 2022 survey; 15 agreed to be interviewed. Based on contacts suggested by the first 15 respondents, the team used snowball sampling to recruit another 5 respondents to achieve diversity. Conducting Interviews. Interviews occurred from August to November 2023 and lasted about 45 min via video call. As compensation, respondents could receive a $50 Amazon gift card or a $50 donation to the Special Olympics. All respondents received and reviewed consent language before the interview and verbally consented to participate. Each interview was conducted by an experienced qualitative researcher who recorded the interviews when the respondents provided verbal consent to record. A professional transcription service transcribed the recordings for coding. Analysis. The team identified a priori themes from the interview protocol to develop a codebook and used NVivo, a qualitative coding software, to code and analyze the data and conduct a thematic analysis by summarizing findings across respondents for each code. The team piloted the codebook with one interview transcript and refined the codebook to ensure interrater reliability. Two team members each coded half of the 20 interview transcripts, and the study lead conducted quality assurance of all coded transcripts to ensure coding consistency. To give a sense of the prevalence of responses among respondents without overemphasizing counts in this qualitative study, many findings are framed using the following terminology: all (100%), many (50–99%), some (1–49%), or none (0%) of the respondents. The final study sample included 20 people working in a variety of roles [Table ], with a majority (80%) being non-physicians; interviewees were primarily social workers because the study team requested that subspecialist participants in the original study identify the person in their practice responsible for PS referrals for a follow-up interview. Both physicians and non-physicians interviewed had experience making PS referrals, although most physicians said they held a more supportive role in the PS referral process. The study also included perspectives from various pediatric subspecialties [Table ] across nine hospitals in California; the most common specialties were hematology/oncology/neuro-oncology, neonatology, and nephrology/neurology. The team observed thematic consistency across respondents, and key themes from the qualitative interviews are outlined below. Referral Process Evaluation of Need. Nearly all respondents depended on structured psychosocial assessments and clinical judgement to identify families’ need for PS. The most cited factors used when considering referrals were the following: Psychosocial needs . Respondents said they referred families that exhibit heightened fear, stress, anxiety, or challenges coping with a diagnosis or treatment plan. Limited resources and social support . 
Respondents noted that they were more likely to refer families with limited family or community support, language barriers, less experience navigating the health care system, and lower incomes. Diagnosis and needs. Some respondents said they discuss PS opportunities with every family they meet with as part of their standard workflow for a first encounter. In other practices, staff identify families facing rare diagnoses, complex or higher acuity medical needs, surgeries and transplants, and challenges understanding a new diagnosis. Timing. Respondents noted that some medical situations might require immediate decisions by caregivers who then do not have sufficient time to be connected to PS. Some practices do not offer PS initially until they feel the caregivers are ready to connect with others outside their immediate support system. "It's never something that I say in the first sentence because there's too much going on," one social worker remarked. "They have to settle a little from that crisis of hearing their child has the thing that they fear the most in life…" Many respondents noted that caregivers often expressed interest on their own and asked to be connected to PS. Many asked to be connected to "experienced" families (that is, families that have dealt with similar diagnoses and can offer advice and emotional support). Making PS Connections. Beyond knowing what PS resources exist and evaluating a family's need for support, respondents considered which resources would be a good fit for each family. Many noted that families preferred diagnosis-specific support groups to connect with others experiencing similar diagnoses and treatments. Some respondents mentioned PS programs with specific groups for each family member (such as parents and siblings). Respondents considered families' preferences when making referrals. For example, some families preferred to be matched with a one-on-one mentor, and others preferred a group PS setting (both in person and virtual). Several respondents noted that, although they do not refer to resources on social media, some families find PS groups and forums on social media through their own online searches. Some respondents said that external organizations provided online PS, including virtual meetings (often over Zoom); external organizations referenced include the Down Syndrome Association, Pediatric Brain Tumor Foundation, Childhood Cancer Foundation, CureDuchenne, and various local family resource centers. Although some families appreciated the flexibility of virtual options, one respondent shared that a family they worked with found this format less helpful than in-person PS. The process of connecting families to PS was described by respondents as "informal," lacking a documented or standard procedure for all practice staff to follow. To ensure compliance with the Health Insurance Portability and Accountability Act (known as HIPAA), staff seek consent from families before sharing their contact information with a resource. Some practices use signed release forms, but many obtain verbal consent. When connecting families to PS resources, staff sometimes share the family's contact information directly with the resource but often leave it up to the family to make the initial contact. Matching Families to Peer Mentors. Many respondents said that they introduce families to peer mentorship, a type of PS that offers one-on-one connections between families.
Making an appropriate match is particularly important when connecting mentee and mentor caregivers. Respondents noted the following factors they consider when making a match: Diagnosis (timing and condition). Respondents often aimed to match families who have children with similar diagnoses and courses of treatment. They typically selected caregivers who are further along in their child's diagnosis and treatment as the mentor, which allows mentee caregivers to receive advice from a family that has more experience with the diagnosis and care plan. Socioeconomic, cultural, and linguistic background. Some respondents noted that families can feel more comfortable when matched with those of similar backgrounds. Several respondents said that pairing Spanish-speaking families allowed them to receive advice in their primary language and enabled them to better understand the condition and care plan. Respondents also noted that families' socioeconomic circumstances can affect their access to resources, causing differences in their ability to manage their children's conditions. Mentor caregivers who have faced such difficulties might be more helpful to families who are newly navigating systemic challenges. Patient age. Some respondents matched families of similar-age children. Location. Some respondents matched families that live in the same region so that local mentors and mentees can connect in person and share community-specific resources. Coping and understanding of condition. Respondents emphasized the importance of selecting mentors who are positive, demonstrate good coping skills, and have an accurate understanding of the diagnosis and care. They cautioned against matching mentee caregivers with caregivers who mistrust the health system or who had a negative care experience, because this can conflict with the medical guidance mentees receive. Training. Respondents noted that training requirements for peer mentors ranged from specific PS training at external organizations and general training for volunteer staff at the practices to no formal training for this role. Documentation and Follow-Up. Nearly all respondents recorded internal and external referrals in the patient's electronic health record. Many providers did not bill for the time spent providing referrals; several social workers noted that providing referrals is a component of their salaried role. After providing a PS referral, many respondents followed up with caregivers informally, often by checking in during families' subsequent visits to see whether they connected with the resource or mentor and how the interaction went. Respondents emphasized the importance of encouraging families' agency, noting that it should be a family's decision to pursue PS. As one social worker said, "We can't make anyone do anything, that's just not our role. But we empower people to try to get support and we encourage that, and then they take the ball, or they don't." Referral Services External Services. Nearly all respondents referenced external PS resources to which they regularly referred families. Common external resources for PS included national organizations that provide educational and PS connections such as the Pediatric Brain Tumor Foundation, Autism Speaks, American Diabetes Foundation, Epilepsy Foundation, and the Center for Rare Diseases. Some respondents preferred referring families to local community-based organizations. Many of the external resources referenced were specific to a diagnosis.
Some respondents referred families to family resource centers or networks that had their own processes for assessing families and connecting them with educational, financial, emotional, or peer supports. Several respondents referred families to family camps or weekend retreats held by foundations that also offered social events and PS groups tailored to parents and caregivers, siblings, and patients. Some respondents said that their practice relies on PS resources that have long been established at their institution and in their respective pediatric subspecialty field. Social workers noted that their knowledge of community resources is a critical aspect of their role; they learn about some resources when they are onboarded to their role and add to that knowledge through internet searches, networking, and experience working with families. Some respondents mentioned that organizations have reached out to their practice to market their programs and services. Respondents expressed wariness about partnering with unfamiliar organizations. They emphasized the importance of vetting organizations to ensure that they did not provide inappropriate medical advice and were not solely selling a service. Internal Services. Nearly half the respondents noted that their practices offer some form of internal PS services in group settings or one-on-one sessions. Some practices offer formal PS groups facilitated by social workers or psychologists that meet regularly. One respondent said that their PS groups have a topic for each meeting, such as "coping, adjusting to new diagnoses…or self-care for caregivers." Other practices offered drop-in group sessions in which caregivers can meet other families and discuss their experiences. Some practices offer one-on-one PS services for families. For example, one practice assigned "parent liaisons" to each family with a child in the neonatal intensive care unit to help families navigate their child's care, connect them with resources, and provide emotional support. Several practices connected families with peer mentors, as discussed above. Respondents also spoke about social events, such as family days, designed as an opportunity to meet other families. Several respondents said they halted PS groups during the COVID-19 pandemic, and not all have been restarted. Many PS groups were previously held in person and have transitioned to virtual meetings. Barriers and Facilitators Respondents noted the following barriers and challenges to making PS referrals: Cultural and language barriers. Respondents explained that some families are not comfortable discussing their experience with strangers, especially when services are not offered in their preferred language. Lack of established, reliable PS services. Several respondents noted a lack of available or reliable PS services in their communities. Although some respondents mentioned local family resource centers, many expressed a desire for a system to keep track of available PS resources. One respondent suggested that organizations could hold educational meetings to make providers aware of available PS resources. Limited funding for PS programs. Some respondents pointed to limited funding for organizations that offer PS, especially in the wake of the COVID-19 pandemic. Limited time to make referrals. Several respondents noted that they do not have adequate time in their day to make referrals or to follow up with families. Need for more peer mentors.
Many respondents wanted more peer mentors in their institutions but recognized that there are not enough mentors to connect with all the families who could benefit from mentorship, particularly as the peer mentor position is often unpaid. Logistical challenges. Respondents saw that it can be difficult for families to attend group PS programs because of scheduling barriers such as finding childcare, travel time, competing daytime responsibilities, and taking time off from work. Respondents noted multiple characteristics that improved their ability to provide PS referrals: Care team collaboration. Some respondents said that strong collaboration between various members of the care team—such as holding regular morning huddles or divvying responsibilities across team members—can facilitate successful referrals. Dissemination of information. Staff provided examples of strategies they use to make families aware of PS resources, including flyers, handouts, emails, and, in one case, a QR code on the back of their badge. Making the introduction. Respondents expressed that sufficient time to introduce families to PS services and to facilitate follow-up can help lead to successful connections between families and PS resources. Relationship with external PS staff. Some respondents described how relationships with staff at external PS programs can facilitate the referral process because it gives them a key person at the organization that they can communicate with. One respondent described how a staff member from an external resource regularly comes to their hospital to share resources and introduce themselves to families who are considering PS. PS structure. Some respondents noted that PS groups with a social and therapeutic element encouraged participation. One respondent said that having a PS group run by multidisciplinary teams (including a psychologist and social worker) ensured that all the needs of families participating could be met. Referral Outcomes Respondents said that referrals to PS can be a valuable resource to families and to providers. Impact on Families. Several respondents mentioned that being connected to another family who has gone through a similar experience helps families. One respondent said, “to have…a parent that is dealing with a diagnosis-specific issue. [Support from another family] is irreplaceable.” Respondents have seen these connections reduce feelings of isolation that families experience when they have a child with special health care needs and noted that PS can validate families’ experience and concerns, which can lead to better stress management and decreased anxiety. Respondents said they have seen the connections transform people’s fears into hope. One respondent noted that these connections are “a very, very important part of helping people stay as healthy as possible.” Many respondents indicated that PS improves caregivers’ skills and confidence as well as families’ ability to process medical information, particularly for non-English speakers. Impact on Providers. According to one respondent, PS often complements the medical advice that providers offer. A social worker explained that referring to PS is helpful because her caseload is large; although she is not always able to provide the level of emotional support that families need, peer mentors take on this emotional support role. Nearly all respondents noted that, in the future, they would like to see their peer referral networks expanded. 
Nearly half of respondents said that an online resource hub or established referral network should be created to facilitate the referral process. According to one respondent, providers are often unaware of the extent of PS resources available, and having an accessible hub could help improve awareness. Respondents mentioned several PS services that they would like to see offered or expanded in their practice or community, including the following: Multilingual PS services Adolescent and young adult-specific support groups Virtual support groups (for example, via Zoom) Diagnosis-specific support groups Finally, one respondent said that they want to hear from families who have received PS services to learn about their experience with and perspectives about PS. Providers would find this information useful to help make future referrals.
Respondents said that PS referrals for families of CSHCN are valuable, offering families similar benefits as those noted in the literature, such as improved psychosocial outcomes and efficacy in caring for the CSHCN (Chakraborti et al., ; Hall et al., ; Hughes, ). As study participants provided perspectives of pediatric care institutions, they also emphasized the value of these referrals for providers. Similar to findings in Schor & Fine, , subspecialists and practice staff in this study are frequently connected with families of CSHCN, understand their medical and social needs, and understand the availability of peer mentors, making them well positioned to provide PS referrals. Routinely integrating offers of PS referrals into regular care might be helpful for simplifying workflows and for meeting families' needs for information and support. There are also opportunities to standardize the PS matching process, as identifying and forming matches between mentees and mentors for families is currently largely informal and based on clinical judgement. Regardless of their role or subspecialty, respondents identified similar referral process characteristics, concerns, and successes. Social workers, who were the largest subgroup of respondents, were able to provide more exhaustive lists of the external and internal programs to which they frequently refer families compared with respondents in other roles, likely because knowledge about referrals falls directly within their care responsibilities.
Although providers generally agreed on the importance of PS referrals and expressed positive views of and experiences with PS, respondents expressed that referrals should not have a one-size-fits-all approach. Providers should consider the following when making referrals: Matching Families to PS Services. Evaluating and making the right connections for each family is critical to a successful outcome. Beyond this, several respondents said that PS services should be properly vetted to ensure they will be a reliable and supportive resource to families. Timing. Often, families are initially overwhelmed by a diagnosis, and care should be taken to ensure that the referral is introduced to them at a time that will help the family and not add additional burden. Desire for Referral. Not all families want a referral to PS. Some families prefer to keep their journey private. Strong care team coordination and adequate resources such as staff time, education, and funding for PS programs could help facilitate successful PS referrals. Limitations. Although this study provides useful insights, several limitations could affect the representativeness of our findings. We observed thematic consistency, but the sample of respondents was limited and the opinions and experiences of pediatric subspecialty hospital staff that volunteered to participate might differ from those who were unable or unwilling to contribute their perspectives. Our sample overrepresents certain institutions in which more people were interested in participating. In addition, we were not able to interview respondents from every pediatric subspecialty, and there may be specialty-specific considerations for PS referrals. Future Directions. Many respondents expressed a desire for the expansion of PS services in the future. Efforts to create a network of PS providers across practices in California could be beneficial to the future of PS referrals, as lack of awareness of reputable PS resources is a current barrier to providers who are offering these referrals. Although this qualitative research provides an understanding of the landscape of PS referrals for providers, the community may benefit from further research on the topic to understand how PS referrals affect families directly and whether families are able to access the PS they need. Surveying or interviewing families who have received such services could provide valuable insight to providers for ways to improve PS referrals and availability and types of PS services to meet families’ needs. Below is the link to the electronic supplementary material. Supplementary file1 (DOCX 26 KB) |
Sampling efficiency and nucleic acid stability during long-term sampling with different bioaerosol samplers | a55e1602-881f-4ddf-a970-d6e4e787639e | 11127824 | Microbiology[mh] | In bioaerosol research, active air sampling is the most common method used. There are several different collection principles (e.g., impaction, impingement, filtration, and condensation growth), each with its advantages and disadvantages (Bhardwaj et al., ; Haig et al., ). Bioaerosol research has traditionally relied on culture-dependent methods, but in later years, there has been a shift toward molecular and sequence-based studies of airborne microorganisms (Hou et al., ; Mbareche et al., ). Molecular methods have the potential to greatly improve our understanding and identification of airborne microorganisms, as a large proportion of airborne microorganisms will not grow under standard laboratory conditions. It has also been reported that several factors, including environmental conditions and particle size, affect the culturability of microorganisms (Dybwad & Skogan, ; Lighthart & Shaffer, ; Peccia & Hernandez, ; Šantl-Temkiv et al., ). The shift from culture-dependent to culture-independent methods also changes the requirements to the air sampling strategy. For culture-based work, it is important to use a gentle air sampling technique to preserve culturability, especially for more stress-sensitive organisms such as gram negatives which are more prone to sampling stress than spores and gram positives (Zhen et al., ). Two commonly used collection principles for studying culturable airborne bacteria are impaction directly onto an agar plate at a low flow rate or gentle collection into a liquid buffer. On the contrary, for culture-independent methods, the biological state of the microorganisms is less important as long as the nucleic acids remain intact and can be recovered from the sample (Lindsley et al., ). The air is generally regarded as a low biomass environment, outdoor air in particular. When investigating the aerosol microbiome and microbial diversity using shotgun sequencing there is a need for higher biomass yields than for amplicon-based methods (e.g., 16S rRNA gene amplicon sequencing). Not achieving high enough yields has typically been solved by pooling samples and/or performing whole genome amplification (Abd Aziz et al., ; Be et al., ; Yooseph et al., ). The use of high-volume air samplers (e.g., SASS3100 and Coriolis μ) and/or increased sampling time has proven to be an effective strategy to collect enough biomass for sequencing studies, and a typical prerequisite to collect enough biomass for shotgun sequencing of air samples (Cao et al., ; Gusareva et al., ; Hou et al., ; Leung et al., ). However, high flow rates and long sampling times increase the risk of negative side effects, such as desiccation, osmotic shock, and evaporation of sampling buffer. The stress factors induced by air sampling can impact culturability and viability, and in the worst case, lead to cell rupture and release/loss of nucleic acids (King et al., ; Zhen et al., ). Desiccation is a typical drawback for filter sampling (dry sampling), as collected material is surrounded by a continuous airflow which desiccates the cells (Bhardwaj et al., ). To process filter samples, particulate matter is extracted into a liquid, and in this process, nucleic acids can be released into the filter extract if cells are damaged. 
The nucleic acids may remain intact and be recovered for molecular analysis, but it is then important to process the whole sample volume to not lose free-nucleic acids and by that microbial diversity (Bøifot et al., , ; Zhen et al., ). To avoid desiccation of microbial cells during air sampling and maintain culturability and infectivity, collection into a liquid buffer (e.g., impingement and wetted-wall cyclone) is a common strategy (Lindsley et al., ). Although microbial cells and viruses might be protected from desiccation, evaporation of sampling buffer is a common problem for, e.g., impingement (SKC BioSampler) and wetted-wall cyclones (Coriolis μ). Evaporation can reduce the sampling buffer volume below the optimal volume for particle collection, thereby reducing collection efficiency, and it has also been shown that collected material can be reaerosolized and/or suffer from internal loss (Han & Mainelis, ; Lin et al., ; Riemenschneider et al., ; Rufino de Sousa et al., ). To compensate for evaporation, some liquid air samplers, such as the Coriolis μ and SASS2300, can replenish the sampling buffer during collection. However, this increases the air sampler’s complexity (design and operation) and the contamination risk due to inadequate and/or difficult cleaning of the fluidic system. In real-world environments, it will also be difficult to adjust the refill rate as changing temperatures and humidity will affect the evaporation rate, and contamination is a big concern in microbiome studies because of the sensitivity of shotgun sequencing (Eisenhofer et al., ). A recently commercialized alternative based on condensation growth technology, BioSpot-VIVAS, has also the advantage of avoiding desiccation of the cells. Additionally, it is considered to be a gentle and efficient collection principle, suitable for culturing and molecular methods, with a high concentration factor and free selection of collection buffer, convenient for downstream sample processing. However, the flow rate is low (8 L/min) compared to the high-volume air samplers (≥ 300 L/min) that have successfully been used in shotgun sequencing studies (Archer et al., ; Gusareva et al., ; Leung et al., ; Qin et al., ). Condensation growth has been evaluated in several chamber studies and has shown good recovery and preservation of various microorganisms (Degois et al., ; Nieto-Caballero et al., ; Pan et al., ; Raynor et al., ). During the recent pandemic, several studies have reported the successful use of condensation growth sampling to study the abundance and infectivity of SARS-CoV-2 (Banholzer et al., ; Fortin et al., ; Vass et al., ). However, there have been few studies comparing condensation growth with other collection principles using bacteria and molecular methods (Nieto-Caballero, ). Many of the challenges faced by the different collection principles will become more prevalent with increased sampling time. This can cause microbial cells and nucleic acids to be differentially damaged due to varying degrees of resistance to sampling-associated stress factors, or the microorganisms can be lost through reaerosolization (Lemieux et al., ; Zhen et al., ). There are several considerations to be made when selecting an air sampler depending on study design and aim, e.g., sampling efficiency, compatibility with downstream processes, battery operation, low noise level, low weight (mobility), and high flow rate for high resolution or collection of sufficient biomass. 
The many factors that must be considered when selecting an air sampler are likely one of the reasons why there is a lack of standardized and harmonized methods within bioaerosol research. For decades, it has been highlighted that there is a need to standardize and harmonize methods to allow for comparison between studies to advance the field as different collection methods can yield different results (Cox et al., ; Griffiths & DeCosemo, ; Lemieux et al., ; Mainelis, ; Millner, ). To improve our understanding of different collection principles, several studies have sought to compare different collection principles and air samplers and how they can affect infectivity (Degois et al., ; Raynor et al., ), culturability (Dybwad et al., ), microbial diversity in real-world environments (Hoisington et al., ; Lemieux et al., ; Mbareche et al., ), cell damage (Zhen et al., ), RNA stability with low volume air samplers (Degois et al., ; Ratnesar-Shumate et al., ), DNA intactness (King & McFarland, ), and DNA and RNA stability (Guo et al., ; Zhen et al., ). Comparison of different air samplers in real-world environments has shown distinct differences in microbial diversity. Though the underlying mechanism for the differences is not well established, collection efficiency, reaerosolization, and degradation of nucleic acids are potential factors (Hoisington et al., ; Lemieux et al., ; Mbareche et al., ). Previous studies investigating DNA and RNA stability do not give a clear conclusion. Guo et al. showed that liquid sampling in general had a higher nucleic acid recovery than filter sampling, while Zhen et al. showed the opposite with spike experiments. Ratnesar-Shumate et al. found that there are no large differences between filter and liquid sampling, while Degois et al. found variability depending on virus species. Air sampler comparison studies, using aerosolized microorganisms to investigate effects of long-term sampling have only used viruses, and without high-volume air samplers commonly used in microbiome studies (Coriolis μ and SASS3100). In the new era of microbiome studies, there is an increasing need to ensure that representative samples are collected and maintained. Knowledge of nucleic acid stability during long-term sampling with different collection principles is therefore important as long-term sampling is a common strategy to collect enough biomass for metagenomic sequencing. In this study, we investigated physical sampling efficiency and nucleic acid stability in an aerosol test chamber (ATC) for different collection principles during long-term sampling using Uranine and two model organisms, the gram-negative vegetative bacteria Pantoea agglomerans (PA) and the bacteriophage MS2 (MS2). Isopore membrane filters (reference) were compared towards four bioaerosol samplers, the high-volume air samplers SASS3100 (electret filter) and Coriolis μ (wetted-wall cyclone) commonly used in microbiome studies, BioSpot-VIVAS-300P (condensation growth) which has shown promising results in virus studies, and the well-established SKC BioSampler. We characterized their physical sampling efficiency for three different particle sizes (0.8, 1, and 3 μm) relative to a reference sampler using a fluorescent tracer (Uranine). We also investigated nucleic acid recovery and stability of PA (dsDNA) and MS2 (ssRNA) during short- and long-term sampling (10 min and 2 h, respectively) at two particle sizes (1 and 3 μm). We hypothesized that we would find a decrease in nucleic acid yields after long-term sampling. 
The results from this study could help interpret the suitability of different air samplers and collection principles for use in studies where long-term sampling is needed to obtain sufficient biomass (e.g., for shotgun sequencing).
Evaluated air samplers Five different bioaerosol samplers were included in this study (Table ): BioSpot-VIVAS 300-P (hereafter referred to as VIVAS; Aerosol Devices Inc., Fort Collins, CO, USA), SASS3100 (hereafter referred to as SASS; Research International, Monroe, WA, USA), Coriolis μ with the long-time monitoring option (hereafter referred to as Coriolis; Bertin Technologies, Montigny-le-Bretonneux, France), SKC BioSampler (20 ml; hereafter referred to as BioSampler; SKC Inc., Eighty Four, PA, USA), and isopore membrane filters (HTTP03700; hereafter referred to as isopore filters; Merck KGaA, Darmstadt, Germany). All air samplers were used according to the manufacturers' instructions, except the BioSampler, which used a longer sampling time than recommended without the addition of mineral oil or glycerol. Isopore filters were selected as the reference air sampler because, under the test conditions in this study, they displayed higher physical sampling efficiencies and better DNA/RNA stability during long-term sampling than the more commonly used reference sampler, the BioSampler (Supplementary Text in Supplementary File ). Isopore filters were placed in 2-piece conductive filter cassettes (SKC 225-2902, SKC Inc., PA, USA) with cellulose filter support pads. A rotary vane vacuum pump (SECO SV 1008 C, Busch Vacuum Solutions Norway AS, Drøbak, Norway) was used to achieve a flow rate of 15 L/min, and the flow rate was controlled by a rotameter (Aalborg model P, Aalborg Instruments & Controls, Inc., Orangeburg, NY, USA). The SASS used an electret filter (7100-134-232-01, Research International), which consists of a mesh of electrically charged fibers, and had a flow rate of 300 L/min. The VIVAS uses a laminar-flow water condensation particle growth technique to capture aerosol particles at 8 L/min. The temperature settings used for the VIVAS were 5 °C for the conditioner, 45 °C for the initiator, 12 °C for the moderator, 25 °C for the nozzle, and 15 °C for the sample. Particles were deposited into a 35 mm × 10 mm petri dish with 1.5 ml of collection buffer. The liquid cyclone Coriolis had a flow rate of 300 L/min and was tested using the long-time monitoring option with buffer refill during sampling. The BioSampler collects particles through liquid impingement and was run continuously with a starting buffer volume of 20 ml for the long-term sampling. The BioSampler was operated by a rotary vane vacuum pump (GAST 1023-V2-G608NGX, Gast Manufacturing Inc., MI, USA) at 12.5 L/min, and the airflow was measured with a mass flow meter (Sierra Top-Trak® model 826, Sierra Instruments, Monterey, CA, USA). Aerosol test facility Air sampler testing was performed in an aerosol test chamber (ATC, Dycor Technologies, Edmonton, AB, Canada) previously described in Dybwad et al. and Bøifot et al. Briefly, the ATC was fitted with external heating, ventilation, air conditioning (HVAC), high-efficiency particulate air (HEPA)-filtration systems, two mixing fans, and metrology sensors. An optical particle counter (Grimm 1.108, Grimm Technologies, Douglasville, GA, USA) and an Aerodynamic Particle Sizer (APS 3321, TSI, Shoreview, MN, USA) were used for real-time monitoring of test aerosol concentration and particle size distribution. In addition to the APS, a Fast Mobility Particle Sizer (FMPS 3091, TSI) was used to measure particles < 0.5 μm to verify that the total particle concentration in the ATC remained below the maximum limit for the VIVAS (10⁵ particles/cm³).
The ATC was kept at 50% relative humidity and a temperature of 23.1 ± 1.5 °C during the trials. Test agents and spray solutions For physical sampling efficiency testing, three different spray solutions were prepared in MQ water (Purification System, Merck KGaA), one for each particle size. The final Uranine concentration (1.08462, Merck KGaA) was 0.5 mg/ml for 0.8 μm particles, 5 mg/ml for 1 μm particles, and 1.5 mg/ml for 3 μm particles. Two well-characterized model organisms, PA and MS2, were selected as representatives for gram-negative bacteria and viruses, respectively (Bhardwaj et al., ; Dybwad & Skogan, ). Spray solutions containing PA (ATCC 33243, ATCC, Manassas, VA, USA) were prepared fresh each day. PA was cultured in 30 ml nutrient broth (105,443, Merck KGaA) and incubated overnight (20 h) at 30 °C in an orbital shaking incubator (Corning LSE 71, Corning Inc., Corning, NY, USA) at 200 rpm. The culture was centrifuged at 2500 g (ThermoFisher Scientific Multifuge X1R, ThermoFisher Scientific, Waltham, MA, USA) for 15 min and the supernatant was removed. For 1 μm particles, the bacterial pellet was resuspended in 30 ml of MQ water, and 2 ml was transferred to 48 ml sterile MQ water with a 0.025 mg/ml final concentration of Uranine. For 3 μm particles, the bacterial pellet was resuspended in 48 ml sterile MQ water together with Uranine at a final concentration of 0.2 mg/ml. A stock solution of MS2 phage (DSM 13767, DSMZ German Collection of Microorganisms and Cell Cultures GmbH, Braunschweig, Germany) was prepared, and fresh spray solutions were made each day from the stock. In brief, 1.75 ml of an overnight culture of Escherichia coli (DSM 5695, DSMZ GmbH) was used to inoculate 50 ml of NZCYM broth (544. NZCYM-medium, DSMZ GmbH) containing 2 mg/l streptomycin (S9137, Merck KGaA) and incubated in an orbital shaking incubator at 37 °C and 200 rpm until the OD600 was 0.3–0.6. Approximately 1 × 10¹⁰ PFU of MS2 phage was added to the E. coli culture and further incubated overnight (20 h) in the orbital shaking incubator at 37 °C and 200 rpm. To the culture, 100 µl lysozyme (25 mg/ml; 1.05281, Merck KGaA) was added and incubated for 30 min (37 °C) before the addition of 100 µl chloroform (1.02444, Merck KGaA) and 100 µl EDTA (0.5 M; 1.08418, Merck KGaA), and the mixture was allowed to incubate for another 30 min. The culture was centrifuged at 2000 g to remove cell debris before the supernatant was filtered through a 0.2-μm syringe filter (WHA10462200, Merck KGaA), and the stock was stored at 4 °C. The MS2 stock solution was quantified using a phage plaque assay, and the concentration was 4 × 10¹⁰ PFU/ml. For 1 μm particles, 0.5 ml of MS2 stock solution was diluted in sterile MQ water to a final volume of 50 ml together with Uranine at a final concentration of 0.025 mg/ml. For 3 μm particles, 1 ml of the MS2 stock solution was diluted in sterile MQ water to a final volume of 40 ml together with Uranine at a final concentration of 0.5 mg/ml. Aerosol generation For physical sampling efficiency testing with Uranine, the mass median aerodynamic diameter (MMAD) was 0.8, 1.3, and 3.4 μm, and the MMAD for aerosols containing MS2 or PA was 1.5 and 3.4 μm. These are hereafter referred to as 1 and 3 μm particles.
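As a quick numerical check on the spray-solution preparation described above, the fragment below recomputes the nominal MS2 concentration in the two spray suspensions from the stock titer. It is a minimal sketch, assuming simple volumetric dilution with no losses during handling; the titer and volumes are taken from the text, and the helper function is only for illustration.

```python
# Minimal sketch: nominal test-agent concentration after volumetric dilution (C1*V1 = C2*V2).
# Values are taken from the spray-solution descriptions above; handling losses are ignored.

def diluted_concentration(stock_conc: float, stock_volume_ml: float, final_volume_ml: float) -> float:
    """Return the nominal concentration after diluting a stock volume to a final volume."""
    return stock_conc * stock_volume_ml / final_volume_ml

ms2_stock_pfu_per_ml = 4e10  # titer of the MS2 stock solution

# 1 um spray solution: 0.5 ml of stock brought to 50 ml
print(diluted_concentration(ms2_stock_pfu_per_ml, 0.5, 50.0))   # 4.0e8 PFU/ml

# 3 um spray solution: 1 ml of stock brought to 40 ml
print(diluted_concentration(ms2_stock_pfu_per_ml, 1.0, 40.0))   # 1.0e9 PFU/ml
```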
The particle size distributions were calculated based on APS 3321 measurements from at least five separate experiments as mean (± standard deviation) of numerical median aerosol diameter (NMAD; μm), MMAD (μm), and geometric standard deviation (GSD; unitless), and can be found in Table in Supplementary File together with the particle concentration (particles/ml). The particle background in the empty ATC had an NMAD of 0.7–0.8 μm and a concentration of 0.1–0.2 particles/ml, and during long-term sampling, the NMAD was 0.8 μm with a concentration of 0.5–0.9 particles/ml. When instruments were running inside the ATC, small particles were generated, and this is reflected in the slightly higher particle background during long-term sampling compared to the empty ATC. Aerosol particles of Uranine with an MMAD of 0.8 μm and 1 μm were generated using a Hudson RCI 1710 Up-Draft nebulizer (Medline International B.V., Arnhem, Netherlands) propelled with N₂ gas. Aerosol particles with an MMAD of 1 μm (MS2 and PA) and 3 μm (Uranine, MS2, and PA) were produced with 120-kHz and 48-kHz ultrasonic atomizer nozzles (Sono-Tek, Milton, NY, USA), respectively, and powered with 2 W by an ECHO multiband ultrasonic generator (Model 06-05-00330, Sono-Tek). The spray solution was loaded into 50-ml Luer lock syringes placed in a syringe infusion pump (Model 997E, Sono-Tek), and the ultrasonic atomizer was fed with 1–1.5 ml/min for 3–4 min. After dissemination, the ATC was homogenized with the internal mixing fans for 1 min before initiating sampling (Fig. ). The mixing fans continued to operate throughout the experiments to create stirred settling sampling conditions. Appropriate instrument settings for the ATC and its subsystems were determined during pre-study experiments and kept static throughout the study. The total amount disseminated was adjusted for each test setting such that the total aerosol biomass collected in 10 min was within the quantitation limits of the quantitative PCR (qPCR) assays for all air samplers. The airflow inside the ATC has previously been measured and shown to be < 0.7 m/s in all sampling locations (Bøifot et al., , ). Aerosol collection All air samplers were positioned inside the ATC, except the VIVAS, which was placed underneath with a conductive tube extending in a straight vertical line into the ATC. All air sampler inlets were located 20 cm above the ATC floor. For sampling with Uranine, five trials for each particle size (0.8, 1, and 3 μm) were performed with simultaneous collection with all air samplers for 10 min. Between each trial, the ATC was purged (~10 min) before air samples were recovered. Isopore and SASS filters were placed in 10 ml PBS (P4417, Merck KGaA) with 0.05% Triton X-100 (Merck 11,869, Merck KGaA) and 0.005% Antifoam-A (A5633, Merck KGaA; PBSTA) and vortexed (20 s) for extraction of particles. Coriolis, VIVAS, and BioSampler used MQ water as a collection buffer for the physical sampling efficiency tests with Uranine. Autoclaved MQ water was used as a refill buffer in VIVAS and injected into the growth tube wicks at 20 μl/min. MQ water was also used as a refill buffer in Coriolis. After sampling, the end volumes for the liquid samples were recorded, and samples were kept in the dark at 4 °C before fluorometric analysis. For bioaerosols, simultaneous sampling with the reference sampler was conducted at least 5 times for each particle size (1 and 3 μm) and test agent (MS2 and PA).
Two different sampling times were used, 10 min (short-term sampling) and 2 h (long-term sampling). The short-term sampling acted as a reference to compare the effect of sampling stress during long-term sampling. For long-term sampling, there was 10 min of active sampling before the ATC was purged and the air samplers continued sampling clean air for 110 min, 2 h in total (Fig. ). Filter extraction was performed as described for Uranine collection. For bioaerosol sampling, PBS was used as a collection buffer for Coriolis, VIVAS, and BioSampler, while MQ water was used as a refill buffer in Coriolis and VIVAS as described for Uranine collection. Similar to Uranine collection, end volumes for the liquid samples were recorded. Since VIVAS had a lower end volume than the other air samplers, the entire end volume was transferred to 7.5 ml PBS before aliquots were taken for nucleic acid extraction. All samples were vortexed for 20 s before an aliquot was transferred to 10 ml NucliSENS lysis buffer (BioMérieux, Marcy-l'Étoile, France). For samples containing MS2, a 4 ml aliquot was used for all samplers. For PA, a 4 ml aliquot was used for VIVAS, BioSampler, and isopore filters. For the high-volume samplers (SASS and Coriolis), 0.4 ml was used, as 4 ml resulted in concentrations above the limit of quantification for the qPCR assay. Lysis buffer samples for nucleic acid extraction were stored at 4 °C, or at − 80 °C if samples were not processed within 3 days. To investigate the potential for sample-to-sample cross-contamination, samples were collected in an empty ATC following the same conditions as described above, and the results showed negligible traces of contamination (> 100-fold less than during aerosol experiments) with test agent-specific qPCR assays.

Nucleic acid extraction and qPCR

Nucleic acid extraction was performed with the NucliSENS Magnetic Extraction Reagent Kit (BioMérieux, Marcy-l'Étoile, France). The manufacturer's protocol was followed, but with 90 μl magnetic beads instead of 50 μl. Nucleic acids were quantified with qPCR using test agent-specific primers and probes (Table in Supplementary File ; Invitrogen, Waltham, MA, USA) for MS2 (O'Connell et al., ) and PA (Braun-Kiewnick et al., ). The MS2 assay used the RNA Virus Master (Cat. No. 06754155001, Roche Diagnostics, Oslo, Norway) with a total reaction volume of 20 μl, including 5 μl of sample and a final concentration of 0.5 μM forward and reverse primer and 0.25 μM probe. The amplification was performed on a LightCycler 96 (Roche) starting with reverse transcription at 50 °C for 10 min, followed by 45 cycles at 95 °C for 5 s and 60 °C for 30 s. The PA assay was performed in a 20 μl volume using SYBR Green Master (Cat. No. 04707516001, Roche) with a 2-μl sample and a final concentration of 0.5 μM of each primer. Amplification was performed on a LightCycler 96 with 10 min preincubation at 95 °C, followed by 40 cycles of 95 °C for 10 s, 60 °C for 20 s, and 72 °C for 30 s. Standard curves were created by serial dilution of MS2 RNA and PA DNA.

Fluorimeter analysis

Uranine concentrations were measured using a FLUOStar Optima microplate fluorimeter (BMG Labtech, Offenberg, Germany). All samples were vortexed for 20 s before aliquots were taken for analysis. Due to the high flow rates of SASS and Coriolis, these samples were diluted tenfold.
Samples from filter and liquid samplers were obtained in different buffers, and to obtain an equal concentration of Triton X-100 before fluorescence measurement, 100 μl of sample was mixed with 100 μl of either PBS or PBSTA. Thereafter, 200 μl 0.1 M Tris-base buffer pH 9.5 (Sigma-Aldrich, St. Louis, MO, USA) was added to each sample and mixed well before 100 μl triplicates were measured using Corning 3915 black 96-well microplates (Sigma-Aldrich). To generate a standard curve, Uranine was serially diluted in the same buffer as the samples.

Calculation and statistical analysis

Results were expressed as μg Uranine/m³ of air (physical sampling efficiency) or genome copies/m³ of air (nucleic acid stability) to compensate for the different flow rates, and were made relative to isopore filters (reference). SPSS 29.0 (IBM SPSS Statistics) was used to analyze the results. An independent-samples Kruskal–Wallis test was used to compare air samplers and particle sizes, and post hoc Dunn's tests were performed for pairwise comparisons in cases where the Kruskal–Wallis test was significant. An independent-samples Mann–Whitney U test was performed to investigate the significance level between 10 min and 2 h. Bonferroni correction was used to correct P-values for multiple comparisons. The significance level was set to < 0.05. Boxplots (Figs. and ) were created in R using Tidyverse and ggsignif, while the boxplots in the supplementary material were created in SPSS 29.0.
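To illustrate the normalization and testing workflow in a reproducible form, the following Python sketch (not part of the original study, which used SPSS and R) converts recovered Uranine mass to an air concentration, normalizes it to the reference sampler, and applies a Kruskal–Wallis test followed by pairwise Mann–Whitney U tests with Bonferroni correction. The sampler names match the study, but the flow rates, extract volumes, and measurement values shown are illustrative assumptions, and pairwise Mann–Whitney U tests stand in for Dunn's post hoc test (which is available in the scikit-posthocs package rather than SciPy).

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu
from statsmodels.stats.multitest import multipletests

def air_concentration(extract_ug_per_ml, end_volume_ml, flow_l_min, minutes):
    """Concentration in ug/m3: total recovered mass divided by sampled air volume."""
    total_ug = extract_ug_per_ml * end_volume_ml    # mass recovered in the whole sample
    air_m3 = flow_l_min * minutes / 1000.0          # litres sampled -> cubic metres
    return total_ug / air_m3

# Illustrative replicate measurements (ug/ml in the extract) for three samplers.
rng = np.random.default_rng(0)
extracts = {"isopore": rng.normal(1.0, 0.05, 5),
            "SASS": rng.normal(0.9, 0.05, 5),
            "Coriolis": rng.normal(0.5, 0.05, 5)}
flows = {"isopore": 15, "SASS": 300, "Coriolis": 300}   # L/min (assumed)
volumes = {"isopore": 10, "SASS": 10, "Coriolis": 12}   # ml extract (assumed)

conc = {s: air_concentration(extracts[s], volumes[s], flows[s], minutes=10) for s in extracts}
relative = {s: conc[s] / conc["isopore"].mean() for s in conc}   # relative to reference

# Omnibus comparison of samplers, then pairwise tests with Bonferroni correction.
h_stat, p_kw = kruskal(*relative.values())
pairs = [("isopore", "SASS"), ("isopore", "Coriolis"), ("SASS", "Coriolis")]
p_raw = [mannwhitneyu(relative[a], relative[b]).pvalue for a, b in pairs]
p_adj = multipletests(p_raw, method="bonferroni")[1]
print(f"Kruskal-Wallis P = {p_kw:.3f}")
for (a, b), p in zip(pairs, p_adj):
    print(f"{a} vs {b}: corrected P = {p:.3f}")
```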
Physical sampling efficiency

Physical sampling efficiencies relative to the reference sampler for 0.8, 1, and 3 μm particles were determined for the evaluated air samplers using Uranine (Fig. and Table in Supplementary File ). SASS had a significantly higher sampling efficiency for 3 μm particles compared to 0.8 μm (99 ± 2% vs 82 ± 4%, P = 0.007), while no significant difference was found between 1 μm (96 ± 3%) and the two other particle sizes. Coriolis showed a significantly lower sampling efficiency for 0.8 μm particles compared to 3 μm particles (7 ± 0.3% vs 91 ± 11%, P = 0.001), while no significant difference was found between 1 μm (50 ± 2%) and the two other particle sizes. VIVAS showed no significant difference in sampling efficiency for the three particle sizes (0.8 μm: 92 ± 3%, 1 μm: 94 ± 8% and 3 μm: 86 ± 2%), nor did BioSampler (0.8 μm: 91 ± 5%, 1 μm: 92 ± 11% and 3 μm: 94 ± 4%). A nonparametric Kruskal–Wallis test showed that there were no significant differences between the air samplers for 3 μm particles (P = 0.092), while significant differences were found for 0.8 (P = 0.002) and 1 μm (P = 0.008) particles. A post hoc pairwise comparison of the different air samplers showed that for 0.8 μm particles there was a significant difference between Coriolis and VIVAS (7 ± 0.3% vs 92 ± 3%, P = 0.003), and Coriolis and BioSampler (7 ± 0.3% vs 91 ± 5%, P = 0.015). For 1 μm particles, there was a significant difference between Coriolis and SASS (50 ± 2% vs 96 ± 3%, P = 0.011), and Coriolis and VIVAS (50 ± 2% vs 94 ± 8%, P = 0.027). In summary, there were no significant differences (P ≥ 0.301) between BioSampler, SASS, and VIVAS for 0.8 and 1 μm particles, and for 3 μm particles, there was no significant difference (P = 0.092) between any of the air samplers. However, Coriolis showed significantly lower sampling efficiencies (for 0.8 and 1 μm particles) compared to the other air samplers.

Nucleic acid stability

Aerosols containing test agents (MS2 and PA) were generated at two different particle sizes (1 and 3 μm), totaling four test conditions, to investigate the stability of nucleic acids during long-term sampling (Fig. and Table in Supplementary File ). For 1 μm MS2 particles, there was a significant decrease from 10 min to 2 h for Coriolis (84 ± 11% vs 45 ± 20%, P = 0.016) and similarly for BioSampler (93 ± 6% vs 23 ± 5%, P = 0.008), while no significant difference was observed for SASS (86 ± 21% vs 102 ± 23%, P = 0.421) and VIVAS (91 ± 13% vs 80 ± 15%, P = 0.151). For 3 μm MS2 particles, there was a significant decrease after 2 h for BioSampler (116 ± 31% vs 77 ± 21%, P = 0.030), while no significant difference was observed for SASS (98 ± 31% vs 87 ± 13%, P = 0.662), Coriolis (129 ± 13% vs 82 ± 41%, P = 0.151) and VIVAS (110 ± 6% vs 101 ± 22%, P = 0.177). For 1 μm PA particles, there was no significant difference between 10 min and 2 h for any of the air samplers, SASS (97 ± 14% vs 103 ± 21%, P = 0.481), Coriolis (86 ± 10% vs 72 ± 12%, P = 0.222), VIVAS (21 ± 4% vs 30 ± 12%, P = 0.222), and BioSampler (140 ± 17% vs 129 ± 28%, P = 0.247). For 3 μm PA particles, there was a significant decrease for Coriolis (87 ± 9% vs 34 ± 24%, P = 0.008) and BioSampler (124 ± 11% vs 101 ± 6%, P = 0.002), while no significant difference was observed for SASS (79 ± 18% vs 76 ± 8%, P = 0.841) and VIVAS (25 ± 3% vs 47 ± 23%, P = 0.151).
Only Coriolis and BioSampler, both using a collection principle that leads to loss of collection buffer, showed a significant decrease in genome copies/m³ relative to the reference after 2-h sampling. Coriolis also had a significant decrease in Uranine concentration for all test conditions (Fig. in Supplementary File ) after 2 h: for 1 μm MS2 (94 ± 13% vs 36 ± 22%, P = 0.008), 3 μm MS2 (108 ± 14% vs 64 ± 33%, P = 0.032), 1 μm PA (102 ± 8% vs 66 ± 6%, P = 0.008), and 3 μm PA (97 ± 14% vs 28 ± 14%, P = 0.008). BioSampler, which also showed a decrease in genome copies/m³, did not display a similar decrease in Uranine concentrations. A new set of experiments was therefore conducted with Coriolis, using PBS spiked with Uranine as a collection buffer, and running the instrument for up to 2 h (Supplementary Text in Supplementary File ). Coriolis showed a significant decrease in Uranine concentration after 10 min, 1 h, and 2 h, suggesting that Uranine was lost during operation of the instrument. Rinsing the air inlet and metal flow cane of the instrument with water reduced variability in the spike results. This suggests that rinsing removed contamination in the air inlet and metal flow cane that could otherwise contaminate the sample through backflow. It was observed that 1 μm PA showed stable genome copy yields relative to the reference for all air samplers, including Coriolis, which had shown a significant decrease in Uranine concentration. The characterization of the reference sampler (Supplementary Text in Supplementary File ) showed that there was a decrease in DNA for 1 μm PA from 10 min to 2 h. Therefore, raw values (genome copies/m³) were used to identify whether there was a decrease in genome copies after 2 h (Tables and in Supplementary File ) for all air samplers. There was a significant decrease in genome copies from 10 min to 2 h for 1 μm PA for the reference (4.21 × 10⁷ vs 2.72 × 10⁷, P = 0.013) and a non-significant decrease in genome copies for SASS. Coriolis and BioSampler also had a significant decrease in genome copies from 10 min to 2 h for 1 μm PA based on raw values, while VIVAS showed a stable concentration. VIVAS showed a notable difference in sampling efficiency between MS2 and PA. However, the Uranine concentration (tracer) did not differ significantly (P ≥ 0.329) between MS2 and PA, suggesting that the experiments and the instrument had worked successfully. Theoretical calculations were performed to investigate whether there was an uneven distribution of PA and Uranine particles which could give rise to differential sampling (Supplementary Text in Supplementary File ). The calculations showed that every 3 μm PA particle would contain Uranine and several viable PA cells. For 1 μm PA, all particles would contain Uranine, while only 10% of the particles would contain both viable PA and Uranine. This low fraction of PA-containing particles could potentially lead to differential sampling, but since it applied only to 1 μm PA while the reduced PA sampling efficiency was observed for both particle sizes, differential sampling was considered unlikely to be the issue. This led to additional experiments examining the potential adhesion of PA cells or cell fragments to the petri dish in which VIVAS samples were deposited (Supplementary Text in Supplementary File ), but no signs of adhesion were observed for any of the plasticware tested.
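The "theoretical calculations" mentioned above follow from Poisson statistics for how many bacteria end up in a single spray droplet. The sketch below reproduces that style of reasoning with assumed values for the suspension concentration and the droplet diameters that dry down to roughly 1 and 3 μm particles; the actual input values are in the Supplementary File, so the numbers here are illustrative only.

```python
import math

def expected_cells_per_droplet(cells_per_ml, droplet_diameter_um):
    """Mean number of cells in one spray droplet of the given diameter."""
    radius_cm = droplet_diameter_um * 1e-4 / 2                  # um -> cm
    droplet_volume_ml = 4.0 / 3.0 * math.pi * radius_cm ** 3    # 1 cm^3 = 1 ml
    return cells_per_ml * droplet_volume_ml

def fraction_occupied(lam):
    """Poisson probability that a droplet contains at least one cell."""
    return 1.0 - math.exp(-lam)

# Assumed inputs: suspension concentration and the droplet sizes that dry down
# to ~1 um and ~3 um residual particles (values are illustrative only).
cells_per_ml = 5e8
for droplet_um in (8.0, 24.0):
    lam = expected_cells_per_droplet(cells_per_ml, droplet_um)
    print(f"{droplet_um:>4.0f} um droplet: mean cells = {lam:.2f}, "
          f"fraction with >= 1 cell = {fraction_occupied(lam):.1%}")
```

With these assumed inputs, the larger droplets contain several cells on average (essentially every particle carries viable PA), whereas only a small fraction of the smaller droplets contain a cell, which is the pattern described in the text.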
In this study, we evaluated four air samplers (SASS 3100, Coriolis μ with long-time monitoring option, BioSpot-VIVAS 300P, and SKC BioSampler) for physical sampling efficiency and nucleic acid stability during long-term sampling. All air samplers, except Coriolis, achieved high physical sampling efficiencies (> 80%) for all evaluated particle sizes (0.8, 1, and 3 μm). Our results showed that BioSampler (impingement) and Coriolis (wetted-wall cyclone) had a reduction of DNA (PA) and RNA (MS2) after 2-h sampling, while SASS (electret filter) only experienced a reduction of DNA for 1 μm PA particles. VIVAS showed stable RNA and DNA quantities after 2-h sampling but had a relatively poor sampling efficiency for PA compared to the reference sampler. Physical sampling efficiency is a measure of how efficiently an air sampler collects particles and how efficiently collected material can be recovered from a sample. As mentioned, the physical sampling efficiencies were high for all test conditions and samplers (> 80%) except Coriolis, and the efficiencies were as expected based on previous reports and manufacturer-supplied specifications and test data (Aerosol Devices Inc., ; Bøifot et al., , ; Dybwad et al., ; Kesavan et al., ; SKC Inc., ). Coriolis had a lower physical sampling efficiency (7%) than expected for 0.8 μm particles, given the manufacturer-specified D50 of < 0.5 μm, i.e. 50% collection efficiency at 0.5 μm (Bertin Technologies, ). However, Coriolis had a high physical sampling efficiency (> 80%) for 3 μm particles and performed as expected for 1 μm particles (50%) based on previous reports (Dybwad et al., ). While the physical sampling efficiency for Coriolis based on Uranine was 50 ± 2% for 1 μm particles, the tracer (Uranine) used in experiments with 1 μm MS2 and PA showed higher sampling efficiencies of 94 ± 13% and 102 ± 8%, respectively. This was likely caused by a difference in the aerosol generation method, which resulted in a lower NMAD for the physical sampling efficiency experiments (Hudson nebulizer) compared to that of the MS2 and PA experiments (120 kHz Sono-Tek), though the MMAD was similar. Differences in aerosol generation methods could also explain the unexpectedly low sampling efficiency for 0.8 μm particles compared to the D50 stated by the manufacturer. When investigating the nucleic acid stability during long-term sampling, only VIVAS showed stable DNA and RNA quantities after 2 h compared to 10 min for all test conditions. In contrast, the liquid samplers Coriolis and BioSampler displayed reduced DNA and RNA stability relative to the reference for all conditions. However, for 1 μm PA, both Coriolis and BioSampler displayed stable concentrations relative to the reference after 2-h sampling, but this apparent stability was caused by a significant reduction (genome copies/m³) in the reference sampler, which was not observed for the other test conditions. Based on raw values (genome copies/m³), Coriolis and BioSampler had a significant reduction after 2 h for 1 μm PA, and a reduction was also observed for SASS, though not significant. The observed reduction of DNA in 1-μm PA experiments for filter samplers is likely a result of desiccation and degradation of DNA during long-term sampling. This was not an issue for 3 μm PA, but microorganisms in smaller particles can be more exposed to desiccation than those in larger particles (Lighthart & Shaffer, ).
No reduction in RNA for filter samplers was observed for either 1 or 3 μm MS2, but it has previously been suggested that, due to the small size of MS2 (27 nm), even particles from 100 to 450 nm provide a shielding effect for survival of MS2 (Zuo et al., ). The results show that reduction of DNA is an issue with filter sampling for certain conditions during long-term sampling. In real-world sampling, this would lead to a non-representative sample by underestimating PA compared to MS2. However, this study only included two test agents and did not include gram-positive bacteria or spores, which are considered to be more resistant to sampling stress. Further studies are needed to understand whether this is a widespread issue for smaller particles containing microorganisms and, consequently, the impact it may have on microbiome studies. Coriolis not only showed a reduction in genome copies; a significant decrease in Uranine concentration was also observed for all test conditions. Additional spike experiments with Uranine showed a significant reduction in Uranine concentrations even after 10 min of running the Coriolis. The loss of Uranine can be attributed to evaporation/reaerosolization, as no evidence of photobleaching was found for the duration and environmental conditions used in the spike experiments. Rufino de Sousa et al. have previously shown that reaerosolized material can be deposited internally in Coriolis. The Uranine concentration in spike samples was highly variable after 2 h, but the variability was reduced when the air inlet and metal flow cane were rinsed with water in between every run. As large volumes of collection buffer evaporate during long-term sampling with Coriolis (Tseng et al., ), there are concerns that this liquid may condense on the interior walls of the air inlet and flow cane. This can cause a random backflush into the sampling cone, which can contribute to cross-contamination between samples and could explain the large variations observed for Coriolis. Based on our findings, we would recommend cleaning or rinsing between each run, especially for long-term sampling, and not merely each day or between each controlled room as stated in the manufacturer's manual. While Coriolis displayed a decrease in both Uranine and genome copies, BioSampler only showed a decrease in genome copies and had relatively stable Uranine concentrations for all test conditions. Reaerosolization from BioSampler has been reported several times (Lin et al., ; Riemenschneider et al., ), but Han and Mainelis found that the largest loss of material was internal to the BioSampler. Lemieux et al. have shown that different bacteria can reaerosolize at different rates in the BioSampler. Differential reaerosolization could explain the difference in stability between Uranine and MS2/PA in BioSampler, but based on our results we cannot conclude which mechanisms (reaerosolization, internal loss, or degradation) contributed to the loss of MS2 and PA. It is also important to note that sampling buffers can affect evaporation rates and thereby reaerosolization. For BioSampler, it is recommended to use mineral oil or glycerol to avoid evaporation during long-term sampling, but these additives can affect downstream molecular processes (Pan et al., ; SKC Inc., ). Loss of collected material from liquid air samplers is a recognized problem and should be taken into account if such samplers are used in microbiome studies, where representative samples are essential.
An interesting finding in this study was the difference in sampling efficiency (relative to the reference) observed between PA and MS2 with VIVAS, where MS2 had sampling efficiencies around 100% and PA around 20% for short-term sampling. Translated into real-world sampling, this would result in a non-representative sample in which PA was underestimated. The MS2 results were as expected based on other studies, which have shown that the condensation growth principle employed by VIVAS has high sampling efficiencies for MS2 and viruses when compared to other air samplers (Degois et al., ; Jiang et al., ; Raynor et al., ). Despite the low sampling efficiency for PA, Uranine was stable between all experiments, suggesting that the sampler had operated correctly. Other factors that might explain the difference in sampling efficiency were investigated, such as VIVAS' upper particle concentration limit, differential sampling between PA and Uranine, and adhesion of PA to the collection petri dish. However, none of the investigated factors could explain the observed result. A limitation of the spike experiments investigating adhesion is that aerosolization may alter the surface properties of the particles/microorganisms, which is not easily reproduced. Few published studies have compared condensation growth to other sampling principles using bacteria. Nieto-Caballero et al. looked at the stability of Bacillus subtilis using the SpotSampler (based on the same condensation growth technique as VIVAS) and found a small decrease in 16S rRNA gene copy numbers after a 50-min sampling, which corresponds well with this study. Nieto-Caballero et al. compared the SpotSampler, BioSampler, and polycarbonate filters (isopore filters), and showed that the SpotSampler performed better (judged by 16S rRNA gene copy number/m³) than the two other samplers for B. subtilis. Differences in experimental factors (e.g., collection buffer, filter extraction, test agents, instruments, and DNA isolation) may have contributed to the opposite conclusions, but there are not enough experimental details available for a thorough comparison. Therefore, further studies are warranted to identify the cause of the discrepancy. It would be interesting to compare VIVAS using other bacterial and fungal species, and with collection directly into a nucleic acid preservative, as this would enhance the preservation of collected samples. We did not explore the effect of different collection principles on real-world samples in this study, but previous studies have compared microbial diversity for some of the air samplers evaluated here (SASS, Coriolis, BioSampler, and isopore filters). Real-world data show conflicting results regarding the comparability of the air samplers. Mbareche et al. concluded that Coriolis did not cover most of the bacterial and fungal diversity found with SASS. Lemieux et al. found that SASS and isopore filters had statistically higher species richness than Coriolis and BioSampler and that the two filter samplers had a comparable bacterial diversity (top 20) that was different from that found by Coriolis and BioSampler. On the other hand, Luhung et al. concluded that SASS and Coriolis displayed comparable microbial diversity based on the top 40 most abundant organisms. While Mbareche et al. and Lemieux et al. used an almost identical methodology and sampled in similar environments, Luhung et al. used a different sample processing, sequencing, and bioinformatics analysis scheme, which may have contributed to the conflicting results. The effect that different protocols have on the observed microbiome should not be underestimated. There is a need to further characterize and harmonize air sampler selection and experimental protocols for microbiome studies so that results can be leveraged across studies, to advance our understanding of the air microbiome.
Increasing air sampling time to collect enough biomass for sequencing studies can come at a cost. The stability of microorganisms and their nucleic acids during long-term sampling is a concern, as representative sample collection is crucial for the validity of microbiome studies. By challenging different air samplers with viruses and bacteria, we studied the stability of nucleic acids during long-term sampling to improve our understanding of how this strategy can affect real-world microbiome studies. We hypothesized that nucleic acid yields would decrease after a 2-h sampling, and this was the case for all test conditions (1 and 3 μm PA, and 1 and 3 μm MS2) for liquid-based collection with BioSampler and Coriolis, and for 1 μm PA for filter-based collection with SASS and isopore filters. VIVAS displayed stable yields for long-term sampling, but with lower sampling efficiency for PA compared to MS2. All air samplers included in this study were associated with some limitations that would affect aerosol microbiome studies. Long-term sampling with filters and sampling with condensation growth would, based on our results, collect a non-representative sample, while valuable biomass would be lost from liquid-based air samplers (e.g., through reaerosolization). Our results support the view that there are fundamental differences between the collection principles, which can manifest as differences in observed microbial diversity. This shows the importance of considering the bias introduced by air sampling when selecting air samplers for microbiome studies, and when interpreting microbiome data. As it stands, no air sampler is perfect, and new investigations are needed to understand the mechanisms behind these biases and how they can be overcome to unlock the true microbial diversity.
Below are the links to the electronic supplementary material.
Supplementary file 1 (DOCX 48.4 KB)
Supplementary file 2 (XLSX 395 KB)
scMMAE: masked cross-attention network for single-cell multimodal omics fusion to enhance unimodal omics
Moreover, scMMAE can transfer the knowledge learnt from the fusion of proteomics and transcriptomics to enhance the representation of scRNA-seq. ScMMAE contains two encoders for gene and protein expression, a cross-attention mechanism between the two omics, and the fusion of modality-distinct information (DI) with cross-modal features. A masked autoencoder (MAE) was applied for model pretraining, and transfer learning was used to transfer knowledge to enhance single-cell transcriptome analysis. We evaluated the performance of scMMAE in transcriptomic and proteomic integration with 10 metrics, including adjusted Rand index (ARI), normalized mutual information (NMI), and Fowlkes–Mallows index (FMI), using five CITE-seq cohorts. The performance of scMMAE for transcriptomic representation was also evaluated using four scRNA-seq cohorts. Our model demonstrated exceptional performance both in integrating single-cell transcriptomics and proteomics and in facilitating single-cell transcriptome analysis. It provides better representations for resolving cell types such as CD4 and CD8 T cells and for downstream analyses such as diagnostic biomarker identification.

Overview of ScMMAE

As transcriptomics and proteomics may include complementary information about cells, scMMAE takes DI from transcriptomics and proteomics into consideration in multi-modal omics fusion. The neural network architecture of scMMAE first applies an autoencoder with a cross-attention mechanism to learn cross-modal information, bridging the gap between the two omics. Both the encoder and decoder use multi-head self-attention layers. The cross-attention mechanism is applied in the latent space as described in equation (1):

(1) $\mathrm{CrossAtt}(E_{i}W_{1}, E_{j}W_{2}, E_{j}W_{3}) = \mathrm{Softmax}\left(\frac{E_{i}W_{1}(E_{j}W_{2})^{T}}{\sqrt{d_{E_{j}W_{2}}}}\right)E_{j}W_{3}$,

where $E_{i}$, $E_{j}$ are the multi-head self-attention encoders of the two omics and $W_{1}$, $W_{2}$, $W_{3}$ represent three learnable projections. Then, it combines the cross-modal information with DI from the encoders of the different modalities to capture the intricate relationships and dependencies between genes and proteins, mapping them into a unified latent space for better representation. We denote DI from the two omics as $I_{i}$ and $I_{j}$. The fused representation can be written as equation (2):

(2) $u = \left(I_{i} + \mathrm{CrossAtt}(E_{i}W_{1}, E_{j}W_{2}, E_{j}W_{3})\right) + \left(I_{j} + \mathrm{CrossAtt}(E_{j}W_{1}, E_{i}W_{2}, E_{i}W_{3})\right)$.

Importantly, since most samples are profiled by scRNA-seq only, scMMAE transfers the knowledge learnt from multi-modalities to unimodality to enhance the representation of scRNA-seq. The training of scMMAE comprises three stages. Stage 1: Limited by the number of annotated CITE-seq data, we applied a self-supervised learning method, MAE, to pretrain the model. We masked part of the genes and proteins and forced scMMAE to reconstruct the missing inputs from the unmasked features, using cells from five CITE-seq datasets. We denote the original input of transcriptomics and proteomics as $X_{RNA}$ and $X_{ADT}$. We masked part of the input, denoted as $X_{RNA}^{masked}$ and $X_{ADT}^{masked}$, and fed it into the model.
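To make equations (1)–(2) concrete, the following PyTorch sketch (an illustration, not the released scMMAE implementation) shows a cross-attention exchange in which each modality queries the other modality's encoder output, and the result is added to that modality's distinct information before the two streams are summed into the fused embedding. Treating the encoder outputs as token sequences, mean-pooling them before the final sums, and sharing the three projections across both directions are simplifying assumptions.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Sketch of scMMAE-style cross-attention fusion (cf. eqs. 1-2)."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.w1 = nn.Linear(dim, dim)   # query projection (W1)
        self.w2 = nn.Linear(dim, dim)   # key projection (W2)
        self.w3 = nn.Linear(dim, dim)   # value projection (W3)
        self.scale = dim ** -0.5

    def cross_att(self, query_tokens, context_tokens):
        """Softmax(Q K^T / sqrt(d)) V with Q from one modality, K/V from the other."""
        q, k, v = self.w1(query_tokens), self.w2(context_tokens), self.w3(context_tokens)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v

    def forward(self, e_rna, e_adt):
        # e_rna: (cells, n_gene_tokens, dim); e_adt: (cells, n_protein_tokens, dim)
        c_rna = self.cross_att(e_adt, e_rna)   # protein queries attend to gene tokens
        c_adt = self.cross_att(e_rna, e_adt)   # gene queries attend to protein tokens
        # Distinct information I = encoder output; pool tokens to one vector per cell
        # before summing the two residual streams into the fused embedding u (eq. 2).
        i_rna, i_adt = e_rna.mean(1), e_adt.mean(1)
        return (i_rna + c_rna.mean(1)) + (i_adt + c_adt.mean(1))

# Example: 8 cells, 1000 gene tokens and 20 protein tokens, embedding dimension 128.
fusion = CrossModalFusion(dim=128)
u = fusion(torch.randn(8, 1000, 128), torch.randn(8, 20, 128))
print(u.shape)  # torch.Size([8, 128])
```

In the full model, fused embeddings of this kind are what feed the stage-2 classification head described later in this section.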
The model encodes the masked input into a low-dimensional latent space, $X_{RNA}^{masked}, X_{ADT}^{masked} \rightarrow Z_{RNA,ADT}$, and tries to reconstruct the original input through the decoders, $Z_{RNA,ADT} \rightarrow \hat{X}_{RNA}, \hat{X}_{ADT}$, where $Z_{RNA,ADT}$ represents the output of the encoder in the latent space and $\hat{X}_{RNA}, \hat{X}_{ADT}$ represent the outputs of the two decoders. The parameters of the network can be obtained by optimizing the loss function in equation (3):

(3) $\hat{\theta} = \arg\min_{\theta}\left\{\mathcal{L}_{dist}(X_{RNA}, \hat{X}_{RNA}) + \mathcal{L}_{dist}(X_{ADT}, \hat{X}_{ADT})\right\}$.

Basic information and the relationship between genes and proteins are learnt by scMMAE in this stage. Stage 2: ScMMAE was trained on a small part of the CITE-seq data with cell annotations as labels. It was required to accurately predict cell types and learn cell information based on transcriptomics and proteomics through this process. Stage 3: ScMMAE transferred the knowledge learnt from multimodal omics to enhance single-cell transcriptome analysis by training with only part of the scRNA-seq data. The cross-attention mechanism keeps the knowledge of the multi-omics and is revised into a self-attention for unimodal analysis. The data structure of the input, network parameters, training processes, and other details are described in the following parts.

CITE-seq and scRNA-seq dataset preprocessing

ScMMAE adopted the popular training strategy of pre-training followed by fine-tuning. At the pre-training and fine-tuning stages, we utilized five CITE-seq datasets, and four scRNA-seq datasets were used during the prediction stage; detailed information regarding these datasets is provided. Of note, in the absence of annotated cell types for three of the CITE-seq datasets (PBMC5K, PBMC10K, and MALT10K), we employed weighted nearest neighbor methods to conduct multi-omics analysis using Scanpy. The remaining datasets were annotated according to their sources. Since the code for the scCTCLust method is incomplete and unusable, we did not include it in the comparison. We applied totalVI, SCOIT, jointDIMMSC, scMM, and BREMSC to embed CITE-seq cells in a common latent space as benchmark methods in the fine-tuning stage, and applied Seurat, Scanpy, SCVI, and Pagoda2 for unimodal prediction. See and for procedures and parameterization of multi-omics and unimodal omics, respectively. We applied distinct normalization strategies tailored to each data type: RPKM normalization for the RNA-seq data and centered log ratio (CLR) normalization for the proteomic data to mitigate compositional effects:

(4) $\mathrm{CLR}(x_{i}) = \ln\left(\frac{x_{i}}{\sqrt[n]{\prod_{j=1}^{n} x_{j}}}\right)$,

where $x_{i}$ represents the $i$th protein expression value in the cell, $n$ denotes the total number of proteins in a cell, and $j$ iterates over all proteins. After normalization, we selected 4000 genes with high variability along with all proteins as model input to capture the most informative features. All preprocessing procedures were executed using Scanpy's integrated functions. For the sepsis case studies, we collected scRNA-seq data for sepsis patients and healthy controls from the Broad Institute Single Cell Portal, portal ID SCP548 (subject PBMCs). The collection contains scRNA-seq data for 126 351 cells from 29 septic patients and 36 controls across three cohorts. The key cohorts focused on those who had a urinary tract infection (UTI) early in their illness progression.
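As a concrete illustration of this preprocessing, the sketch below applies CLR normalization (equation 4) to mock protein counts and selects 4000 highly variable genes with Scanpy. It is a minimal example rather than the authors' pipeline: the input matrices are simulated, log1p is used as a guard against zero counts, and library-size normalization stands in for the RPKM step.

```python
import numpy as np
import scanpy as sc

def clr_normalize(counts: np.ndarray) -> np.ndarray:
    """Centered log-ratio across proteins within each cell (rows = cells), cf. eq. (4)."""
    log_counts = np.log1p(counts)                      # log(1 + x) guards against zero counts
    geometric_mean = log_counts.mean(axis=1, keepdims=True)
    return log_counts - geometric_mean                 # log(x) minus per-cell mean log(x)

rng = np.random.default_rng(0)
# Mock CITE-seq matrices: 500 cells x 6000 genes and 500 cells x 20 proteins.
adata_rna = sc.AnnData(rng.poisson(1.0, size=(500, 6000)).astype(np.float32))
adt_counts = rng.poisson(5.0, size=(500, 20)).astype(np.float32)

# RNA: library-size normalization + log, then 4000 highly variable genes.
sc.pp.normalize_total(adata_rna, target_sum=1e4)
sc.pp.log1p(adata_rna)
sc.pp.highly_variable_genes(adata_rna, n_top_genes=4000, subset=True)

# ADT: CLR normalization across proteins within each cell.
adt_clr = clr_normalize(adt_counts)
print(adata_rna.shape, adt_clr.shape)   # (500, 4000) (500, 20)
```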
Subjects with UTIs and either mild or transient organ dysfunction (Int-URO), UTIs with evident or persistent organ dysfunction (Urosepsis, URO), bacteremic patients with sepsis in hospital wards (Bac-SEP), and patients admitted to the medical intensive care unit (ICU) with sepsis (ICU-SEP) are among the septic patients. Subjects with UTI and leukocytosis (blood WBC ≥ 12 000 per mm³) without organ dysfunction (Leuk-UTI), patients hospitalized in the medical intensive care unit without sepsis (ICU-NoSEP), and healthy uninfected controls were included as control samples.

Network architecture of ScMMAE

We proposed a cross-attention-based network called scMMAE that can integrate single-cell transcriptomics and proteomics data and transfer the fused knowledge to enhance scRNA-seq data. scMMAE was constructed on the MAE framework, tailored for handling multimodal tasks, with the incorporation of cross-attention mechanisms for data integration. ScMMAE comprises three main stages. In the first stage, referred to as the pre-training stage, scMMAE includes two encoders, two decoders, and one cross-attention architecture. Following the pre-training stage, the model discards one of the decoder architectures, and the cross-attention output is augmented with the residual to serve as the model's final output. Lastly, in stage 3, we streamline the architecture, reducing its complexity compared to the stage 2 model except for the cross-attention architecture, and use one fully connected layer to replace the original cross-modal queries.

Transcriptomics and proteomics data reconstruction

Suppose the input CITE-seq dataset is denoted as $S = \{s_{RNA}^{i}, s_{ADT}^{i}\}$, $i \in [1, k]$, where $k$ is the number of cells. Each sample consists of transcriptomics and proteomics, and our objective is to learn a unified representation $u \in \mathbb{R}^{k \times m}$ for the integrated CITE-seq data, where $m$ is the dimension of the final cell embedding, set to 128 in this experiment. Before learning the unified representations, we first let the model grasp gene and protein expression information, then acquire cell type information, and ultimately validate the model. Hence, the initial step is to reconstruct the transcriptomics and proteomics matrices. The input feature $X_{in}$ in the first stage consists of the transcriptomic feature $X_{RNA} \in \mathbb{R}^{k \times 4000}$ and the proteomic feature $X_{ADT} \in \mathbb{R}^{k \times (\text{protein number})}$, where 4000 is the number of highly variable genes (HVGs) used in this experiment and the number of proteins is determined by the respective dataset. $X_{RNA}$ contains the gene expression values $X_{RNA}^{ex} \in \mathbb{R}^{k \times 4000}$ and the gene symbol embedding $X_{RNA}^{sym} \in \mathbb{R}^{k \times 4000}$, and $X_{ADT}$ contains the protein expression values $X_{ADT}^{ex} \in \mathbb{R}^{k \times (\text{protein number})}$ and the protein symbol embedding $X_{ADT}^{sym} \in \mathbb{R}^{k \times (\text{protein number})}$.
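The input construction can be sketched as follows (illustrative, not the released implementation): each gene or protein token is the sum of an embedding of its expression value and a learnable symbol embedding, and a random subset of tokens is masked before the stage-1 encoder, in the spirit of a masked autoencoder. The 50% masking ratio and the linear value projection are assumptions.

```python
import torch
import torch.nn as nn

class OmicsTokenizer(nn.Module):
    """Token = expression-value embedding + learnable feature (symbol) embedding."""

    def __init__(self, n_features: int, dim: int = 128, mask_ratio: float = 0.5):
        super().__init__()
        self.value_proj = nn.Linear(1, dim)                            # embeds each expression value
        self.symbol_emb = nn.Parameter(torch.zeros(n_features, dim))   # learnable symbol vectors
        nn.init.normal_(self.symbol_emb, std=0.02)
        self.mask_ratio = mask_ratio

    def forward(self, expr: torch.Tensor):
        # expr: (cells, n_features) normalized expression matrix
        tokens = self.value_proj(expr.unsqueeze(-1)) + self.symbol_emb   # (cells, n_features, dim)
        n_keep = int(tokens.shape[1] * (1.0 - self.mask_ratio))
        keep_idx = torch.rand(tokens.shape[:2]).argsort(dim=1)[:, :n_keep]  # random subset per cell
        visible = torch.gather(
            tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])
        )
        return visible, keep_idx   # visible tokens go to the encoder; indices mark what was kept

tokenizer = OmicsTokenizer(n_features=4000)
visible, kept = tokenizer(torch.rand(8, 4000))
print(visible.shape)   # torch.Size([8, 2000, 128])
```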
It can be represented as follows:

(5) $$X_{in} = \left(X_{RNA}^{ex} + X_{RNA}^{sym},\; X_{ADT}^{ex} + X_{ADT}^{sym}\right),$$

where $X_{in}$ represents the input feature, $X_{RNA}^{ex}$ and $X_{ADT}^{ex}$ are the gene and protein expression values, and $X_{RNA}^{sym}$ and $X_{ADT}^{sym}$ are the gene symbol embedding and the protein symbol embedding, which we set up as learnable vectors in this experiment. To learn the representation $u$, we initially masked gene and protein expression values at random. Subsequently, the unmasked transcriptomics and proteomics data, that is, the unmasked features $X_{in}^{unm}$, were encoded separately using a transformer block based on a multi-head attention mechanism. The attention of each head is calculated as follows:

(6) $$h_{h} = \mathrm{Attention}\left(X_{in}^{unm} W_{in}^{Q},\, X_{in}^{unm} W_{in}^{K},\, X_{in}^{unm} W_{in}^{V}\right) = \mathrm{Softmax}\left(\frac{(X_{in}^{unm} W_{in}^{Q})(X_{in}^{unm} W_{in}^{K})^{T}}{\sqrt{d_{K}}}\right) X_{in}^{unm} W_{in}^{V},$$

where $W_{in}^{Q}$, $W_{in}^{K}$, and $W_{in}^{V}$ are the weight matrices, $d_{K}$ is the dimension of the keys, and $\mathrm{Attention}(\cdot)$ is a function that focuses the network's attention on the most informative small part of the data. The whole encoding process can be described as:

(7) $$E_{RNA}^{unm} = \mathrm{Concat}\left(h_{1}, h_{2}, \ldots, h_{h}\right) W^{O},$$

where $\mathrm{Concat}(\cdot)$ is a function that concatenates the features inside, $W^{O}$ is the output weight matrix, and $h$ is the number of heads. In this experiment we use two heads. After the unmasked gene and protein data enter the encoders, $E_{RNA}^{unm}$ and $E_{ADT}^{unm}$ are the outputs of the gene encoder and the protein encoder, respectively. Then $E_{RNA}^{unm}$ and $E_{ADT}^{unm}$ are fed into the cross-attention structure as follows:

(8) $$C_{RNA} = \mathrm{CrossAttention}\left(E_{ADT}^{unm}, E_{RNA}^{unm}, E_{RNA}^{unm}\right) = \mathrm{Softmax}\left(\frac{E_{ADT}^{unm}\,(E_{RNA}^{unm})^{T}}{\sqrt{d_{E_{RNA}^{unm}}}}\right) E_{RNA}^{unm},$$

(9) $$C_{ADT} = \mathrm{CrossAttention}\left(E_{RNA}^{unm}, E_{ADT}^{unm}, E_{ADT}^{unm}\right) = \mathrm{Softmax}\left(\frac{E_{RNA}^{unm}\,(E_{ADT}^{unm})^{T}}{\sqrt{d_{E_{ADT}^{unm}}}}\right) E_{ADT}^{unm},$$

where $\mathrm{CrossAttention}(\cdot)$ is an attention mechanism in the transformer architecture that mixes the two modalities, transcriptomics and proteomics: the cross-attention for the transcriptomic data is computed with queries from the proteomics, and vice versa for the proteomic data. $C_{RNA}$ and $C_{ADT}$ represent the cross-attention results of the transcriptomics and proteomics, respectively. In this experiment we employ two cross-attention structures. The function $\mathrm{Softmax}(x)$ maps its inputs to values between 0 and 1 that sum to 1. Of note, we used DI (denoted as $I_{RNA}$ and $I_{ADT}$), i.e. the outputs of the respective modal encoders $E_{RNA}^{unm}$ and $E_{ADT}^{unm}$, to preserve the unique information of the different omics after computing the cross-attention mechanism for each modality. The input of each decoder consists of two parts:

(10) $$D_{RNA}^{in} = E_{RNA}^{unm} + C_{RNA},$$

(11) $$D_{ADT}^{in} = E_{ADT}^{unm} + C_{ADT}.$$

Subsequently, the calculation process of the decoding steps mirrors that of the encoding part.
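To make the fusion step concrete, below is a minimal PyTorch-style sketch of the bidirectional cross-attention with residual connections in equations (8)-(11). It is an illustrative simplification: the class and variable names are ours, and PyTorch's `nn.MultiheadAttention` adds its own query/key/value projections, which the equations above do not include.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Bidirectional cross-attention between RNA and ADT embeddings (eqs 8-11), simplified."""
    def __init__(self, dim=128, n_heads=2):
        super().__init__()
        self.attn_rna = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.attn_adt = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, e_rna, e_adt):
        # C_RNA: queries from the protein encoder, keys/values from the gene encoder (eq 8)
        c_rna, _ = self.attn_rna(query=e_adt, key=e_rna, value=e_rna)
        # C_ADT: queries from the gene encoder, keys/values from the protein encoder (eq 9)
        c_adt, _ = self.attn_adt(query=e_rna, key=e_adt, value=e_adt)
        # residuals with the modality-specific encoder outputs give the decoder inputs (eqs 10-11)
        return e_rna + c_rna, e_adt + c_adt

# toy usage: 500 cells embedded in 128 dimensions for each modality
fusion = CrossAttentionFusion(dim=128, n_heads=2)
d_rna_in, d_adt_in = fusion(torch.randn(1, 500, 128), torch.randn(1, 500, 128))
```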
Last, the outputs of the two decoders, $D_{RNA}^{out}$ and $D_{ADT}^{out}$, are subjected to loss calculations against the initial transcriptomics and proteomics matrices, respectively. The formula is as follows:

(12) $$\mathcal{L}_{recon} = \alpha\left(\frac{1}{n}\sum_{i=1}^{n}\left(x_{RNA,i}^{ex} - D_{RNA,i}^{out}\right)^{2}\right) + \beta\left(\frac{1}{n}\sum_{i=1}^{n}\left(x_{ADT,i}^{ex} - D_{ADT,i}^{out}\right)^{2}\right),$$

where $\alpha$ and $\beta$ are hyperparameters that determine the loss weights of the two modalities independently, and $n$ is the number of cells in the experiment. $x_{RNA,i}^{ex}$, $x_{ADT,i}^{ex}$, $D_{RNA,i}^{out}$, and $D_{ADT,i}^{out}$ are the elements of $x_{RNA}^{ex}$, $x_{ADT}^{ex}$, $D_{RNA}^{out}$, and $D_{ADT}^{out}$, which represent the true gene expression values, the true protein expression values, the predicted gene values, and the predicted protein values, respectively.

Learning representation of transcriptomics and proteomics

In the second stage, where the model learns cell type information, we discard the decoder part, and all gene and protein data are fed into the encoders without masking. Next, the DI (the encoder outputs) and the cross-attention outputs are directly combined as the final result:

(13) $$u = \left(I_{RNA} + C_{RNA}\right) + \left(I_{ADT} + C_{ADT}\right).$$

Here, $I_{RNA}$ and $I_{ADT}$ denote the outputs of the gene and protein encoders without masking, respectively. The representation $u$ at the layer before the softmax layer is utilized for downstream tasks. For the loss calculation, $u$ undergoes a softmax layer to generate predicted cell types, which are compared with the true cell types. This stage's objective function is defined using the cross-entropy loss:

(14) $$\mathcal{L}_{CE} = -\sum_{i=1}^{n}\left[\, y_{i}\log(\hat{y}_{i}) + (1 - y_{i})\log(1 - \hat{y}_{i})\,\right],$$

where $n$ is the number of CITE-seq cells in the training process, and $y_{i}$ and $\hat{y}_{i}$ are the true and predicted cell type labels, respectively.

Transfer multi-omic knowledge to transcriptome

In the final stage, only single-modal data are utilized, halving all structures:

(15) $$u_{RNA} = I_{RNA} + C_{RNA},$$

where $u_{RNA}$ is the cell representation of the transcriptomics. Notably, we employed a fully connected layer on the modality itself to substitute for the query from the other modality. The loss is calculated as follows:

(16) $$\mathcal{L}_{CE} = -\sum_{j=1}^{m}\left[\, y_{j}\log(\hat{y}_{j}) + (1 - y_{j})\log(1 - \hat{y}_{j})\,\right],$$

where $m$ is the number of scRNA-seq cells in the training process, and $y_{j}$ and $\hat{y}_{j}$ are the true and predicted cell type labels, respectively.
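The following is a brief PyTorch sketch of the two training objectives just described: the modality-weighted reconstruction loss of equation (12) and the cell-type classification loss of equations (14)/(16). Treating the reconstruction error as a mean-squared error and using PyTorch's multi-class cross-entropy are our simplifying assumptions; the 0.7/0.3 weights follow the values reported in the training section below.

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(x_rna, d_rna_out, x_adt, d_adt_out, alpha=0.7, beta=0.3):
    # Equation (12): weighted reconstruction error over the two modalities
    # (squared error is an assumption; alpha/beta are the RNA/ADT loss weights).
    return alpha * F.mse_loss(d_rna_out, x_rna) + beta * F.mse_loss(d_adt_out, x_adt)

def cell_type_loss(logits, labels):
    # Equations (14)/(16): cross-entropy between predicted and true cell-type labels.
    return F.cross_entropy(logits, labels)
```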
Model training and hyperparameters

The initial step in scMMAE involves reconstructing the RNA and protein matrices for the transcriptomic and proteomic data, respectively. We observed distinct behaviors based on dataset size: for datasets containing fewer than 10 000 cells, scMMAE typically converged within 20 epochs, whereas larger datasets required approximately 150 epochs to reach convergence. Notably, scMMAE demonstrates improved resource efficiency and faster processing times compared to Bayesian Gibbs-sampling-based methods such as BREMSC and jointDIMMSC, which typically take over an hour to run with default parameter settings; in contrast, scMMAE completes one epoch in approximately 15 s. As for the masking strategy, we randomly mask the genes and proteins in each cell, and the masking positions of the two omics are not synchronized in the first stage. In the second stage, the masking ratio is set to 0 and we refined the pre-trained model by incorporating 30 % annotated data. After convergence was reached, the model generated global cell embeddings for the entire dataset, as well as local cell embeddings for the respective data modalities, in preparation for downstream tasks. Finally, to validate our model, we employed scRNA-seq data and followed a training process similar to that of the second stage. It should be noted that we performed the pre-training and fine-tuning experiments on the five CITE-seq datasets separately. In addition, we conducted ablation studies on the five CITE-seq datasets to underscore the effectiveness of the cross-attention architecture. We utilized three distinct evaluation metrics (ARI, NMI, and FMI) to determine the impact of the cross-attention-based model compared to the baseline model, which relies solely on element-wise addition. The results showed that the indicators increased substantially on all datasets except NMI on the SPL206 dataset. These findings provide further evidence of the effectiveness of the proposed structural innovation. Additionally, we conducted mask-ratio ablation experiments on the five CITE-seq datasets, evaluated using ARI, NMI, and FMI, revealing that a 15% mask ratio yields optimal results. When choosing the number of HVGs, we tested 1000 to 5000 HVGs; the model performed consistently well with 4000 HVGs, so we used 4000 HVGs in our experiments. DI and omics information were also selectively ablated to assess the individual contributions of cross-modal and modality-specific information to the performance of downstream cell type prediction tasks. The training strategy was key to the strong performance of scMMAE: if scMMAE is not pre-trained and processes the data directly, performance drops considerably. This shows that the pre-training strategy is necessary, as it allows the model to better understand the expression information of genes and proteins. Throughout the experiments, we determined the optimal distribution of the loss weights, prioritizing transcriptomic data (0.7) over proteomic data (0.3). The finalized model configuration includes six encoder layers and four decoder layers, with all head counts set to 2 and a dropout ratio of 0.1.
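As an illustration of the masking strategy described above, here is a minimal PyTorch sketch of stage-1 random masking with independent mask positions for the two omics; the 15% ratio follows the ablation result, while the zero-filling of masked entries, the toy matrices, and the function name are our own assumptions.

```python
import torch

def random_mask(x, mask_ratio=0.15):
    """Randomly mask a fraction of features per cell; returns the masked matrix and the mask."""
    mask = torch.rand_like(x) < mask_ratio        # True where the entry is masked out
    return x.masked_fill(mask, 0.0), mask         # zero-fill is a simplification of the real model

# toy expression matrices: 500 cells x 4000 HVGs and 500 cells x 200 proteins
x_rna = torch.randn(500, 4000)
x_adt = torch.randn(500, 200)

# stage 1: masks for the two omics are drawn independently (not synchronized)
x_rna_masked, rna_mask = random_mask(x_rna, mask_ratio=0.15)
x_adt_masked, adt_mask = random_mask(x_adt, mask_ratio=0.15)
```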
Evaluation metric of clustering results

All clustering and community detection results are measured using the ARI, NMI, and FMI. Given two sets of clusterings, $Y$ (true labels) and $\hat{Y}$ (predicted labels), on $x$ samples, $Y$ contains $m$ clusters $\{Y_{1}, Y_{2}, \ldots, Y_{m}\}$ and $\hat{Y}$ contains $n$ clusters $\{\hat{Y}_{1}, \hat{Y}_{2}, \ldots, \hat{Y}_{n}\}$. $n_{ij}$ denotes the number of samples belonging to both $Y_{i}$ and $\hat{Y}_{j}$. The ARI formula is as follows:

(17) $$ARI = \frac{\sum_{ij}\binom{n_{ij}}{2} - \left[\sum_{i}\binom{|Y_{i}|}{2}\sum_{j}\binom{|\hat{Y}_{j}|}{2}\right]\Big/\binom{x}{2}}{\frac{1}{2}\left[\sum_{i}\binom{|Y_{i}|}{2} + \sum_{j}\binom{|\hat{Y}_{j}|}{2}\right] - \left[\sum_{i}\binom{|Y_{i}|}{2}\sum_{j}\binom{|\hat{Y}_{j}|}{2}\right]\Big/\binom{x}{2}},$$

where $|Y_{i}|$ and $|\hat{Y}_{j}|$ denote the number of samples in $Y_{i}$ and $\hat{Y}_{j}$, respectively. NMI is a metric for evaluating network segmentation achieved by community-finding techniques, which can be computed as

(18) $$NMI = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} n_{ij}\log\!\left(\frac{x\, n_{ij}}{|Y_{i}|\,|\hat{Y}_{j}|}\right)}{\frac{1}{2}\left[\sum_{i=1}^{m} |Y_{i}|\log\!\left(\frac{x}{|Y_{i}|}\right) + \sum_{j=1}^{n} |\hat{Y}_{j}|\log\!\left(\frac{x}{|\hat{Y}_{j}|}\right)\right]}.$$

FMI is used to determine the similarity between two clusterings and is defined as

(19) $$FMI = \sqrt{\frac{TP}{TP+FP}\cdot\frac{TP}{TP+FN}},$$

where $TP$, $FP$, and $FN$ are the numbers of sample pairs assigned to the same cluster in both $Y$ and $\hat{Y}$, in $\hat{Y}$ only, and in $Y$ only, respectively. Other evaluation metrics are illustrated in . In the sepsis case studies, we used the area under the receiver operating characteristic (AUROC) curve to evaluate the model's performance across the sepsis datasets.

Downstream analysis for sepsis with ScMMAE

We initially employed scMMAE to cluster data from both the sepsis and control groups, followed by annotating the identified clusters using the true labels. After this clustering and annotation process, we conducted a binary classification task for sepsis. The performance of the fine-tuned network was evaluated across the aforementioned datasets using the AUROC curve. Our assessment benchmarked this performance against existing biomarkers, including FCMR/PLAC8, SeptiCyte, and sNIP, along with our scMMAE model. After establishing these clusters with data from all participants, we further analyzed the variance in cell-state abundances between sepsis and control samples, with particular emphasis on the MS1 and MK subpopulations to explore changes in their proportions.

Visualization, clustering, and annotation

In our study, we employed a $k \times m$ global cell embedding matrix to represent low-dimensional embeddings of $k$ cells. These embeddings were subsequently utilized for downstream analyses, which included constructing a cell adjacency matrix and performing cell clustering. The adjacency matrix was derived from the cell embeddings using a K-nearest neighbors algorithm, with the number of neighbors set to the default value of 20. Cell clustering was then executed using the Leiden algorithm, which operates on the adjacency matrix. To aid visualization, we applied the UMAP algorithm to reduce the cell embeddings and the extracted latent features to a 2D space, enabling the visual discrimination of gene expression levels across different cell clusters. For the UMAP parameters, we set the number of neighbors to 15, the minimum distance to 0.1, and the number of components to 2. These parameters were consistently applied across all benchmark methods for comparison. Following clustering, we identified differential genes within each cluster, serving as distinctive signatures for the various cell types. This was achieved using the Wilcoxon rank-sum test, which assesses differences between two populations based on their relative ranks. Finally, we annotated the cell clusters using the marker genes identified in the previous step. In the comparison of batch effect removal, Scanpy and TotalVI required further parameters and settings for the batch effect; for fairness of comparison between the different methods, we used only the default settings for Scanpy and TotalVI.
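The downstream pipeline just described can be reproduced with standard Scanpy calls. The sketch below is our own (the toy AnnData, the obsm key, and the stand-in embedding are assumptions, not the authors' code); note that in Scanpy the UMAP neighborhood size is taken from the neighbors graph rather than passed to `sc.tl.umap` directly.

```python
import numpy as np
import scanpy as sc
import anndata as ad

# toy stand-ins: 500 cells with 2000 genes, and a 500 x 128 scMMAE-style embedding
adata = ad.AnnData(X=np.random.poisson(1.0, size=(500, 2000)).astype(np.float32))
adata.obsm["X_scmmae"] = np.random.randn(500, 128).astype(np.float32)

sc.pp.neighbors(adata, n_neighbors=20, use_rep="X_scmmae")   # KNN adjacency on the embeddings
sc.tl.leiden(adata)                                          # Leiden clustering on that graph
sc.tl.umap(adata, min_dist=0.1, n_components=2)              # 2D projection for visualization
# cluster-specific marker genes via the Wilcoxon rank-sum test
sc.tl.rank_genes_groups(adata, groupby="leiden", method="wilcoxon")
```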
Integrative analysis of transcriptomics and proteomics data with ScMMAE

We evaluated the performance of scMMAE in the integrative analysis of transcriptomics and proteomics across five CITE-seq datasets, SPL111, SPL206, PBMC5K, PBMC10K, and MALT10K, which contain murine cells, murine cells, peripheral blood mononuclear cells (PBMCs), PBMCs, and mucosa-associated lymphoid tissue (MALT) cells, respectively. The SPL111 and SPL206 datasets include cells collected from murine spleen and lymph nodes. The PBMC5K and PBMC10K datasets contain cells collected from PBMCs. The MALT10K dataset consists of cells from a MALT tumor, a rare kind of malignant lymphoma. We compared scMMAE to existing benchmark methods, including BREMSC, jointDIMMSC, scMM, SCOIT, and TotalVI, using ARI, NMI, and FMI, which are the three most commonly used metrics to evaluate the representation of multi-omics fusion. On top of that, we applied seven other metrics to evaluate the performance, including the adjusted mutual information (AMI), the silhouette coefficient (SC), and others. For 9 of the 10 evaluated metrics, a higher value denotes superior performance, whereas for the Davies–Bouldin index (DBI), a lower value signifies better performance.
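For reference, the three headline clustering metrics can be computed with scikit-learn; the helper below is our own sketch rather than the authors' evaluation code.

```python
from sklearn.metrics import (adjusted_rand_score,
                             normalized_mutual_info_score,
                             fowlkes_mallows_score)

def clustering_scores(true_labels, pred_labels):
    """ARI, NMI, and FMI between reference annotations and predicted cluster assignments."""
    return {
        "ARI": adjusted_rand_score(true_labels, pred_labels),
        "NMI": normalized_mutual_info_score(true_labels, pred_labels),
        "FMI": fowlkes_mallows_score(true_labels, pred_labels),
    }

# example: clustering_scores(["T", "T", "B"], [0, 0, 1])
```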
ScMMAE consistently achieved top-tier performance on average in ARI, NMI, and FMI across the five CITE-seq datasets, compared with the other five methods including BREMSC, jointDIMMSC, scMM, SCOIT, and TotalVI. Some of the evaluation metrics, such as the DBI, were not applicable to methods like BREMSC and jointDIMMSC, because these methods relied on coordinate-based visualization for their computations and do not generate independent visualizations. The results demonstrated that scMMAE outperformed existing methods in eight metrics including the three most important, ARI, NMI, and FMI, and ranked second in the Calinski–Harabasz index (CHI) and Jaccard index (JI), indicating the exceptional performance of scMMAE in multi-omics fusion. A comparison of each of the five datasets demonstrated the comprehensive superiority of scMMAE. ScMMAE achieved better performance with the three most important metrics, ARI, NMI, and FMI, in all five datasets. Although scMM and SCOIT surpassed scMMAE in CHI and JI, they did not perform well in other metrics. In DBI, scMMAE also exhibited the best performance across all datasets except the PBMC5K dataset, where it was close to the best. We projected the cells in the PBMC10K dataset (CITE-seq) into a 2D space using UMAP. All methods tested in this comparison exhibited very similar capacities in resolving the major cell types (myeloid cell, B cell, T cell, and natural killer cell; ), probably due to the distinct gene expression patterns between these cell populations. The expression of the marker genes CD4, CD8A, ITGAM, and JCHAIN is shown for CD4 T cells, CD8 T cells, macrophages, and B cells. In comparison to the major cell lineages, the transcriptomes of the T cell subtypes were very similar to each other, which is reflected by their indiscriminate UMAP projections. This makes T cell subtype identification a challenging puzzle in single-cell transcriptomic studies. In our parallel comparison of different representation methods, we observed that all T cell subtypes showed clumped distributions with clear inter-cluster discrimination in the UMAPs of scMMAE and TotalVI, while the discrimination of different T cell subtypes was blurred in the UMAPs of SCANPY, scMM, and SCOIT. In this regard, scMMAE and scMM also outperformed in myeloid subtype identification, since SCANPY, SCOIT, and TotalVI failed to discriminate CXCL8+ macrophages from CXCL8- ones. Collectively, these results suggested that scMMAE exhibited superior performance in cell subtype identification tasks. Detailed information about the subpopulations and marker genes for all cell types in the PBMC10K dataset is available in . The clustering results for the other four CITE-seq datasets PBMC5K, MALT10K, SPL111, and SPL206 based on the scMMAE output are shown in , respectively.

Enhancing transcriptomics representation with ScMMAE

A better classifier can be obtained by training with multimodal data, which thus enhances unimodal classification. This approach also works for representation learning in single-cell multi-omics ( ; for a detailed explanation please refer to the Discussion). ScMMAE can transfer the deep learning model learnt from fused omics to enhance RNA representation. We applied scMMAE alongside four existing methods, Scanpy, Seurat, Pagoda2, and scVI, to four scRNA-seq cohorts, namely IFNB, CBMC, PBMC 3K, and BMCITE. IFNB profiled PBMCs, including a group stimulated with interferon beta. CBMC collected cord blood mononuclear cells from humans.
PBMC 3K collected PBMCs from healthy donors. BMCITE was obtained from the bone marrow mononuclear cells of a single human donor. Our method scMMAE achieved superior performance compared to the other four methods in terms of the mean values of the 10 evaluation metrics on the four datasets. Although Seurat performed slightly better than scMMAE on one cohort (PBMC 3K), it did not achieve comparable results to scMMAE on the other three cohorts. The pre-trained scMMAE, incorporating a second modality of data, proved highly beneficial for RNA representation and consistently improved performance in different situations. To demonstrate the usefulness and superiority of scMMAE for downstream analysis, we visualized the clustering results of the five methods with cell type annotation (IFNB; ). The IFNB dataset is a scRNA-seq dataset commonly used to study the effects of interferon-beta (IFN-β) on cells. This dataset primarily contains gene expression information from cells stimulated with IFN-β, making it useful for analyzing immune responses, cellular signaling pathways, and the regulatory effects of interferon on different cell types. ScMMAE resolved four major cell populations and 12 subtypes with clear boundaries. We displayed the marker genes CD3E, CD8A, ITGAM, CD19, and FCER1A for CD4 T cells, CD8 T cells, macrophages, B cells, and conventional dendritic cells. As T cell subtypes express similar transcriptomic characteristics, distinguishing subtypes in T cells is challenging. The UMAP projection demonstrated the capability of scMMAE to segregate CD4 and CD8 T cells into separate yet proximate clusters, while Scanpy, Seurat, Pagoda2, and scVI aggregated CD4 and CD8 T cells into a single cluster. Moreover, scMMAE and scVI outperformed in myeloid subpopulation identification, as macrophages were spread across two groups in Scanpy, Seurat, and Pagoda2. Collectively, the results demonstrated the effectiveness of scMMAE for scRNA-seq enhanced by multimodal knowledge. The projections of cells in the three RNA-seq datasets PBMC3K, BMCITE, and CBMC are displayed in , respectively.

ScMMAE overcomes batch effects and preserves tissue information

Appropriately removing batch effects while preserving tissue information is a challenge in single-cell analysis. Of the five CITE-seq datasets mentioned above, only SPL206 included different batches and tissues. SPL111 and SPL206 were from the same source but profiled 111 and 206 proteins, respectively. We took SPL206 as an example to perform batch elimination and compare it with other methods. The SPL206 CITE-seq dataset consists of two batches and two tissues (lymph node and spleen). The batches are still distinct in the UMAPs of Scanpy, SCOIT, and TotalVI. By contrast, in the scMMAE and scMM methods, the two batches of cells were very homogeneously mixed together. Removing the batch effect may, however, lead to the elimination of tissue information. We annotated cells collected from lymph nodes and spleen. Cells from different tissues can be well distinguished in Scanpy, partially distinguished in scMMAE, SCOIT, and TotalVI, while they uniformly mixed together in scMM. These results demonstrate that scMMAE achieves a better balance between removing batch effects and preserving tissue information. In addition to the qualitative analysis above, we also conducted a quantitative evaluation of the model’s performance, using batch average silhouette width (Batch-ASW) and graph connectivity (GC) to assess its effectiveness in batch effect removal.
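One way to sketch the Batch-ASW check with scikit-learn is shown below; this simplified version (silhouette computed with respect to batch labels within each tissue or cell-type group, rescaled so that higher means better mixing) only approximates the scIB-style metric, and the data are invented placeholders rather than the SPL206 profiles.

```python
import numpy as np
from sklearn.metrics import silhouette_samples

def batch_asw(embeddings, batch_labels, group_labels):
    """Simplified batch-ASW: silhouette of each cell with respect to its batch label,
    computed within each tissue/cell-type group and rescaled as 1 - |s|, so values near 1
    indicate well-mixed batches and values near 0 indicate strong batch effects."""
    scores = []
    for group in np.unique(group_labels):
        mask = group_labels == group
        if len(np.unique(batch_labels[mask])) < 2:
            continue  # silhouette is undefined when a group contains a single batch
        sil = silhouette_samples(embeddings[mask], batch_labels[mask])
        scores.append(np.mean(1.0 - np.abs(sil)))
    return float(np.mean(scores))

# Toy example with random placeholder data.
rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 10))
batches = rng.choice(["batch1", "batch2"], size=200)
tissues = rng.choice(["spleen", "lymph_node"], size=200)
print(f"Batch-ASW (toy data): {batch_asw(emb, batches, tissues):.3f}")
```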
Our model ranked first in GC and second in Batch-ASW among the five methods, demonstrating its superior capability in addressing batch effects.

Assisting biomarker identification with ScMMAE

ScMMAE as a representation learning method can benefit downstream analysis for scRNA-seq data. Taking a scRNA-seq profile of sepsis as an example, scMMAE can assist biomarker identification. Sepsis is a life-threatening condition in which the immune system overreacts to infections. This profile includes 106 545 PBMCs collected from 15 sepsis patients in the ICU, 4 patients in hospital wards, 27 patients in the emergency department (ED), and 19 healthy subjects. The ED group consists of 10 patients with UTI and leukocytosis (blood WBC ≥ 12 000 per mm³) but no organ dysfunction (Leuk-UTI), 7 patients with UTI and mild or transient organ dysfunction (Int-URO), and 10 patients with UTI and clear or persistent organ dysfunction (Urosepsis, URO). We clustered and annotated cells from sepsis samples and control samples using the fine-tuned scMMAE based on the CITE-seq dataset described above ( and ). Six cell types (T cells, B cells, natural killer cells, monocytes, dendritic cells, and megakaryocytes) were annotated, and different cell states were identified, including T cell states (TS), B cell states (BS), NK cell states (NS), monocyte states (MS), dendritic cell states (DS), and megakaryocytes (MK). Notably, monocyte state 1 (MS1) and MK showed an increase in sepsis compared to the controls, indicating the potential significance of these subpopulations in sepsis. Comparing the numbers of MS1 and MK cells in sepsis and healthy subjects also revealed significant growth of MK (P value = 2.558e-05; ) and MS1 (P value = 1.840e-05; ) in sepsis, indicating the potential of MK and MS1 as biomarkers. We assessed the diagnostic capability of MS1 and MK using the AUROC. Using the proportion of MS1 in sepsis and control samples revealed by scMMAE achieved an AUROC of 0.90, which is higher than existing biomarkers FAIM3/PLAC8 (AUROC = 0.75), SeptiCyte (AUROC = 0.56), and sNIP (AUROC = 0.61). In addition, we also examined whether scMMAE could distinguish between septic and normal patients in ED situations. The performance of MS1 (AUROC = 0.77) was better than existing biomarkers including FAIM3/PLAC8 (AUROC = 0.66), SeptiCyte (AUROC = 0.48), and sNIP (AUROC = 0.65). The results demonstrated that scMMAE, as a representation of scRNA-seq enhanced by multimodal omics, can assist downstream analysis such as biomarker identification.
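A minimal sketch of this kind of sample-level evaluation with scikit-learn is shown below; the per-subject MS1 fractions and labels are invented placeholders rather than the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# One value per subject: fraction of that subject's PBMCs assigned to the MS1 state (placeholders).
ms1_fraction = np.array([0.31, 0.22, 0.28, 0.35, 0.05, 0.08, 0.04, 0.11])
is_sepsis    = np.array([1,    1,    1,    1,    0,    0,    0,    0   ])  # 1 = sepsis, 0 = control

# AUROC of using the MS1 proportion as a classifier score for sepsis vs control.
auroc = roc_auc_score(is_sepsis, ms1_fraction)
print(f"MS1-proportion AUROC: {auroc:.2f}")
```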
The rapidly developing area of single-cell multi-omics analysis necessitates the development of methods for the collaborative analysis of multimodal data. In this study, we introduced scMMAE, a deep learning-based method for the fusion of RNAs and proteins at the single-cell level. ScMMAE can retain DI to each modality and can switch from bimodal training to unimodal prediction experiments.
Utilizing self-supervised and transfer learning principles, scMMAE performs favourably compared to existing multi-omics analysis techniques and state-of-the-art methods in single-modal data analysis on real-world datasets. This represents a significant advancement in terms of interpretative capabilities, surpassing previous methods. Comprehensive ablation studies further underscore the architectural efficiency of scMMAE. Previous multimodal omics fusion methods, such as Scanpy, Seurat, scVI, and TotalVI, have predominantly focused on aligning data from disparate modalities and integrating the information shared between the two, often overlooking the unique information intrinsic to each omic. However, the unique information from each omic might be crucial as it can complement the deficiencies present in other modalities, thereby enhancing the overall performance of multimodal approaches. In our method, scMMAE aggregates the fused information and DI from transcriptomics and proteomics. Leveraging the comprehensive insights gained from multi-modal omics during the training phase, scMMAE becomes more adept at interpreting and representing single-cell profiles when applied to transcriptomics datasets. From a theoretical perspective grounded in machine learning, the model can leverage distinct modalities to deduce more precise distributions or improved classification boundaries. As illustrated in , discrimination of cell types using transcriptomics alone might result in multiple plausible demarcations for classification. However, incorporating proteomic information enhances the accuracy of defining the classification boundary. Once determined through integrative analysis of multi-omic data, the refined classification boundary can serve as a valuable tool for unimodal data, such as transcriptomics. In this study, we utilized multiple metrics to evaluate the clustering results, aiming to take a comprehensive view of the effectiveness of each method. For example, the scMM method consistently achieves the top rank in the CHI metric because CHI scores are calculated from between-class and within-class variance, indicating that scMM places greater emphasis on the separation and cohesion of clusters. However, scMM neglects other factors, leading to low scores on various metrics. In contrast, our method, scMMAE, effectively considers each metric, achieving good results across the board. Although scMMAE achieved good performance on single-cell transcriptomics datasets by incorporating protein information and DI, it still has limitations. As with other deep neural networks, it is hard to provide an explanation of the relationships between the RNAs and proteins. Therefore, we will try to improve scMMAE with better interpretation in the near future. In addition, we also plan to develop a variant of scMMAE to integrate scATAC-seq and scRNA-seq datasets. This will enable downstream analyses, including gene regulatory network inference and transcription factor identification.

Key Points

- We propose scMMAE to fuse single-cell multi-omics using cross-attention with MAE.
- scMMAE transforms cross-attention to self-attention to enhance single-omics (scRNA-seq) representation with around 10% improvement.
- Downstream analysis reveals its capability of distinguishing sub-populations of cells, mitigating batch effects while preserving tissue-specific information, and identifying biomarkers of diseases such as sepsis.
Involvement of community paediatricians in the care of children and young people with mental health difficulties in the UK: implications for case ascertainment by child and adolescent psychiatric, and paediatric surveillance systems | ecfc2cc2-ac0e-4f35-9c3c-f36a63a8bc6e | 7871672 | Pediatrics[mh] | Epidemiological studies are important for understanding disease trends and planning services. Large scale epidemiological studies help to determine reliable population estimates of common health conditions. However, for less common disorders, large epidemiological studies may not identify enough cases to enable the required analyses. For example, despite a very large representative sample size of 9117 children, the ‘Mental Health of Children and Young People in England Survey’ stated that the ‘sample was too small to reliably detect change in a low prevalence condition.’ Therefore, using typical epidemiological surveys to study uncommon conditions may require prohibitively large sample sizes that would render such studies unaffordable and impractical. On the other hand, surveillance methodology provides a cheaper and more efficient alternative epidemiological approach to studying uncommon conditions. This methodology was pioneered by the British Paediatric Surveillance Unit (BPSU) in 1986. The BPSU has so far conducted 120 surveillance studies, many of which have had important policy impact. Indeed, this surveillance strategy developed by the BPSU has been referred to as a success story of modern paediatrics. The success has led to its replication for paediatric research in many countries (Lynn and Reading). Also, similar methodology has been developed in the UK for obstetrics and gynaecology, ophthalmology, and child and adolescent mental health. The latter is referred to as the Child and Adolescent Psychiatric Surveillance System (CAPSS) ( https://www.rcpsych.ac.uk/docs/default-source/default-document-library/capss-10-year-report-final.pdf?sfvrsn=e3402268_2 ). Although surveillance methodology is typically applied to uncommon disorders, the strategy can equally be used to study aspects of common conditions. Examples include rare events associated with common conditions or practices such as the incidence of neuroleptic malignant syndrome associated with use of antipsychotic medications. Surveillance strategy can also apply to studies of uncommon subtypes of common conditions such as obsessive compulsive disorder (OCD) related to paediatric autoimmune neuropsychiatric disorders associated with streptococcal infections (PANDAS). The principle of surveillance methodology is described in detail elsewhere and illustrated in . Using CAPSS as example, every month, the surveillance team based at the Royal College of Psychiatrists send emails to all consultant child and adolescent psychiatrists in the UK and Ireland requesting them to report whether they have seen a new case of the condition being studied. Consultants who report that they have seen cases are contacted by the researchers (who are independent of CAPSS) to obtain the relevant research data about the case. BPSU and CAPSS operate active case surveillance, which means that consultants are also requested to report if they have not seen a case. This approach helps to monitor response rate and compliance. Given that incidence rate is one of the main outcomes of surveillance studies, it is essential that the estimation of this parameter is reliable in order to have policy impact. 
The reliability of incidence estimates requires that case ascertainment is as complete as possible. This is particularly crucial for less common conditions because missing a few cases can significantly skew the calculated incidence. One of the surveillance strategies to improve case ascertainment is multiple data sourcing such as among different professional groups who are likely to see or know of cases of the conditions being studied. Thus, for conditions commonly seen by both paediatricians and child psychiatrists, ascertainment is improved by concurrent surveillance through BPSU and CAPSS. A joint Royal College of Paediatrics and Child Health/British Association of Community Child Health (BACCH) workforce guide identifies child mental health conditions such as attention deficit and hyperactivity disorder (ADHD) and autism spectrum disorder (ASD) as within the roles and expertise of community paediatricians. It is also well recognised that other mental health conditions such as eating disorders and conversion disorder require paediatric support for optimum assessment and treatment. This understanding has informed joint BPSU/CAPSS surveillance studies of eating disorders, conversion disorder and ADHD transition. The importance of joint BPSU/CAPSS surveillance is well illustrated in the conversion disorder study whereby cases reported by paediatricians and child psychiatrists had only a very small overlap of 4.2%. This study found that surveillance of either professional group alone would have reduced case ascertainment by 59% or 36%, respectively. This strongly underlines the importance of joint BPSU and CAPSS surveillance for better case ascertainment of conditions that commonly interface between Paediatric and Child and Adolescent Mental Health Services (CAMHS). In addition to joint studies, both BPSU and CAPSS conduct single-unit studies for conditions that are considered to be seen almost exclusively by paediatricians (for BPSU) or child psychiatrists (for CAPSS). CAPSS has conducted single-unit studies of non-affective psychosis, paediatric bipolar disorder and early onset depression. These CAPSS-only studies ran on the assumption that adequate case ascertainment is achievable for these conditions through surveillance of only child psychiatrists. It was considered that for such conditions, joint CAPSS and BPSU surveillance would achieve little additional case ascertainment at huge extra costs and increased reporting burden on paediatricians who are unlikely to see affected children. However, while the assumption that paediatricians are not seeing children with the aforementioned types of mental health conditions appears to have face validity, this hypothesis would benefit from empirical evaluation. Although the community paediatrics workforce guide states that community paediatricians are not usually trained to assess and treat these types of mental health conditions, the document acknowledged that the underfunding of CAMHS may lead to increased pressure on community paediatricians to become involved in the management of more mental health difficulties. The latter point is hinted at by the increase in the proportion of community paediatric services that manage ADHD from 15% in 2006 to 63% in 2016. Thus, the first objective of this study is to ascertain the extent to which community paediatricians may be involved in the care of children with mental health conditions, the types of mental health conditions they are involved in providing care for and the reasons for their involvement.
These findings could help to determine with more clarity which child and adolescent mental health conditions are appropriate for CAPSS-only surveillance and which ones justify the additional cost and effort of dual BPSU–CAPSS surveillance to maximise case ascertainment. The second objective of this study is to explore the challenges and opportunities in joint working between community paediatricians and CAMHS. However, due to space limitation, data from this second objective are not included in the current paper, but will be the subject of a separate publication. The study focused on community paediatricians because the structure of health services for children in the UK indicates that these are the paediatricians who are more likely to interface with CAMHS. Furthermore, community paediatricians often work with children who are likely to have experienced childhood adversities that increase the risk of mental health difficulties. Examples of such young people include children looked after by the state and those involved in adoption and fostering, and or safeguarding procedures.
Survey methods

This survey adapted questions and methodology used by an earlier CAPSS survey of consultant child and adolescent psychiatrists. Two experienced community paediatricians and a specialist in surveillance methodology reviewed the earlier survey questions and adapted them for completion by community paediatricians. The final version of the questionnaire was agreed by consensus. The survey included structured questions with multiple response options and one Likert rating scale. The structured questions sought information on the community paediatricians’ special areas of interest, experience of joint working with CAMHS, and presentation of children and young people (CYP) with mental health conditions to their services. The community paediatricians used the Likert scale to rate the likelihood of their involvement in the assessment or care of CYP with specific mental health conditions. The response options were ‘Always/mostly’ (>75%), ‘Sometimes’ (25%–75%), ‘Rarely/never’ (<25%), ‘Don’t know’ and ‘Not applicable’. The structured responses are presented in the results as frequencies and percentages. Provision was made for free text comments to help to further understand the context for answers to the structured questions. Thematic content analysis was used to identify common themes within the participants’ free text comments. The survey was discussed by the executive committee members of CAPSS as well as by members of the BPSU. The survey has been provided as an online supplementary file (10.1136/bmjpo-2020-000713.supp1).

Survey administration

The survey was distributed through the BACCH newsletters and direct mass-emailing to members via a link to the web-based tool Survey Monkey ( www.surveymonkey.com ). Responses were obtained between December 2015 and August 2016. BACCH had a total membership of 1120 in 2015. However, in order to reduce response burden and still achieve national coverage, respondents were advised that they could choose to complete one questionnaire on behalf of their service, unit or department. There were 169 distinctly managed Community Child Health Services in the UK in 2015.

Patient and public involvement

This survey was carried out among clinicians and no direct patient data were required. The BPSU has a permanent patient and public involvement representative on the executive board, who provided support for the study. This is acknowledged in the paper.
Respondents’ characteristics

A total of 245 community paediatricians responded to the survey. Although this represents 22% of the 1120 members of BACCH in 2015 (excluding retired, affiliate and overseas members), we believe that the responses provide a good coverage of the 169 Community Child Health (CCH) units in the UK because respondents were advised that they could choose to complete one questionnaire on behalf of their unit/service. All the respondents stated that they worked clinically in community paediatrics. Most respondents were consultants, 177 (75.3%), but responses were also received from associated specialists, 37 (15.7%); staff grades, 9 (3.8%) and other grades of doctors such as trainees, 12 (5.1%). shows that the respondents’ most common areas of special interest are in neurodevelopmental conditions, 160 (70.5%); neurodisability, 114 (50%); child safeguarding, 97 (42.7%) and behavioural paediatrics, 74 (32.6%).

Joint working with CAMHS

The community paediatricians were asked about joint working with CAMHS in order to gain an understanding of how their organisation’s structures might moderate their involvement in the care of CYP with mental health conditions. Less than half of the respondents (42.7%) reported that their paediatric services are part of a multidisciplinary team or joint service with CAMHS. Thematic analysis of free text comments showed that the most common area of joint work is in the assessment and treatment of CYP with ADHD and ASD, more so for the younger age groups. This theme was mentioned 35 times. An example of a related comment is “I work closely with CAMHS regarding children with ASD and do joint assessments for children 2½–5 years old.”

Local pathways for new presentations of child and adolescent mental health conditions

In order to explore the community paediatricians’ contact with CYP with mental health difficulties at the initial part of the patient’s care journey, they were asked which service(s) a child or adolescent would attend for assessment and/or treatment if they presented in the paediatrician’s catchment area with the specific mental health conditions listed in . Their responses showed that, on the whole, CYP with neurodevelopmental conditions such as ASD, ADHD and Tourette syndrome present more frequently to paediatrics than to CAMHS. The difference is particularly striking for ASD whereby 93% would present to paediatrics compared with CAMHS (46.7%). The proportion for ADHD and Tourette syndrome is evenly split between paediatrics and CAMHS. Also, there is limited presentation to ‘joint services’ for all conditions including neurodevelopmental disorders. The above trend is different in relation to emotional difficulties, in that most CYP with self-harm and suicidal behaviour, depression, anxiety and OCD would present to CAMHS (≊ 98%). However, a sizeable proportion of CYP with these emotional difficulties may also present to paediatricians (eg, 29.5% for anxiety/OCD and 12.8% for depression). Even cases of psychosis and bipolar disorder were reported to present to paediatricians although at very low frequencies (1.8%). Given that the workforce guide for community paediatricians does not recommend working with CYP presenting with the types of emotional difficulties that would typically be seen in CAMHS, the community paediatricians’ free text comments were analysed thematically to understand the reasons why such CYP are presenting to paediatric services. The overwhelming reason identified is ‘difficulty with accessing CAMHS’.
This concern was mentioned 59 times (which represents 24% of the participating community paediatricians). Four examples of related comments are reported in .

Box 1 Showing some examples of comments explaining the involvement of community paediatricians in the assessment or care of children with mental health conditions

‘It is extremely difficult to access CAMHS and so many patients with mental/emotional health concerns end up being seen by paediatrics.’

‘Threshold for referral acceptance by CAMHS is very high so we tend to see a lot of children that would ideally be seen by CAMHS.’

‘The Community Paediatric team is holding responsibility for a large number of children who actually require psychological or psychiatric input which is not provided.’

‘CAMHS have very strict entry criteria and reject a lot of patients meaning that they sometimes come to paediatrics even though we don’t necessarily have the appropriate skills to assess them and no support services to work with them but if CAMHS won’t accept then we are seen as ‘better than nothing’ which is adding additional strain to our already overstretched services.’

Footnotes: CAMHS, Child and Adolescent Mental Health Services.

Involvement of community paediatricians in the assessment or care of children with mental health conditions

In order to gain further understanding about how much community paediatricians are likely to have some involvement with the care of children with mental health conditions attending their paediatric services, they were asked to rate the likelihood of them being ‘aware of’ a child attending their service with the mental health conditions in . ‘Awareness’ of such cases was defined broadly to include direct clinical care for the child or involvement in multidisciplinary team discussion or supervision about the child. This broad definition is in keeping with the level of involvement required for consultants to be able to report a case for BPSU or CAPSS surveillance studies. A consultant only needs to know enough about the case to judge whether the child meets the inclusion criteria for reporting. BPSU and CAPSS encourage consultants to report cases they ‘know of’, even if they believe that someone else might report the case. This practice helps to improve surveillance case ascertainment. The potential for double reporting is preferred to non-reporting because surveillance researchers are able to prevent double-counting of reported cases through a process of de-duplication. By combining the response options of ‘always’ and ‘sometimes’, shows that the vast majority of the community paediatricians (above 75%) have some involvement in the assessment or care of children with ASD, ADHD, Tourette syndrome, intellectual disability and fetal alcohol syndrome. Between 50% and 75% have some involvement in the assessment or care of children with attachment disorder, eating disorder and anxiety including OCD. About one-third (32.3%) are involved in assessment or care of children with depression, and a small proportion (8.2%) in the care of those with psychosis and bipolar disorder.
A total of 245 community paediatricians responded to the survey. Although this represents 22% of the 1120 members of BACCH in 2015 (excluding retired, affiliate and overseas members), we believe that the responses provide a good coverage of the 169 Community Child Health (CCH) units in the UK because respondents were advised that they could choose to complete one questionnaire on behalf of their unit/service. All the respondents stated that they worked clinically in community paediatrics. Most respondents were consultants, 177 (75.3%) but responses were also received from associated specialists, 37 (15.7%); staff grades, 9 (3.8%) and other grades of doctors such as trainees, 12 (5.1%). shows that the respondents’ most common areas of special interests are in neurodevelopmental conditions, 160 (70.5%); neurodisability, 114 (50%); child safeguarding, 97 (42.7%) and behavioural paediatrics, 74 (32.6%).
The community paediatricians were asked about joint working with CAMHS in order to gain an understanding of how their organisation’s structures might moderate their involvement in the care of CYP with mental health conditions. Less than half of the respondents (42.7%) reported that their paediatric services are part of a multidisciplinary team or joint service with CAMHS. Thematic analysis of free text comments showed that the most common area of joint work is in the assessment and treatment of CYP with ADHD and ASD, more so for the younger age groups. This theme was mentioned 35 times. An example of a related comment is “I work closely with CAMHS regarding children with ASD and do joint assessments for children 2½–5 years old.”
In order to explore the community paediatricians’ contact with CYP with mental health difficulties at the initial part of the patient’s care journey, they were asked which service(s) would a child or adolescent attend for assessment and or treatment if they present in the paediatrician’s catchment area with the specific mental health conditions listed in . Their responses showed that, on the whole, CYP with neurodevelopmental conditions such as ASD, ADHD and Tourette syndrome present more frequently to paediatrics than to CAMHS. The difference is particularly striking for ASD whereby 93% would present to paediatrics compared with CAMHS (46.7%). The proportion for ADHD and Tourette syndrome is evenly split between paediatrics and CAMHS. Also, there is limited presentation to ‘joint services’ for all conditions including neurodevelopmental disorders. The above trend is different in relation to emotional difficulties, in that, most CYP with self-harm and suicidal behaviour, depression, anxiety and OCD would present to CAMHS (≊ 98%). However, a sizeable proportion of CYP with these emotional difficulties may also present to paediatricians (eg, 29.5% for anxiety/OCD and 12.8% for depression). Even cases of psychosis and bipolar were reported to present to paediatricians although at very low frequencies (1.8%). Given that the workforce guide for community paediatricians does not recommend working with CYP presenting with the types of emotional difficulties that would typically be seen in CAMHS, the community paediatricians’ free text comments were analysed thematically to understand the reasons why such CYP are presenting to paediatric services. The overwhelming reason identified is ‘difficulty with accessing CAMHS’. This concern was mentioned 59 times (which represents 24% of the participating community paediatricians). Four examples of related comments are reported in . Box 1 Showing some examples of comments explaining the involvement of community paediatricians in the assessment or care of children with mental health conditions ‘It is extremely difficult to access CAMHS and so many patients with mental/emotional health concerns end up being seen by paediatrics.’ ‘Threshold for referral acceptance by CAMHS is very high so we tend to see a lot of children that would ideally be seen by CAMHS.’ ‘The Community Paediatric team is holding responsibility for a large number of children who actually require psychological or psychiatric input which is not provided.’ ‘CAMHS have very strict entry criteria and reject a lot of patients meaning that they sometimes come to paediatrics even though we don’t necessarily have the appropriate skills to assess them and no support services to work with them but if CAMHS won’t accept then we are seen as ‘better than nothing’ which is adding additional strain to our already overstretched services.’ Footnotes: CAMHS, Child and Adolescent Mental Health Services.
In order to gain further understanding about how much community paediatricians are likely to have some involvement with the care of children with mental health conditions attending their paediatric services, they were asked to rate the likelihood of their being ‘aware of’ a child attending their service with the mental health conditions in . ‘Awareness’ of such cases was defined broadly to include direct clinical care for the child or involvement in multidisciplinary team discussion or supervision about the child. This broad definition is in keeping with the level of involvement required for consultants to be able to report a case for BPSU or CAPSS surveillance studies. A consultant only needs to know enough about the case to judge whether the child meets the inclusion criteria for reporting. BPSU and CAPSS encourage consultants to report cases they ‘know of’, even if they believe that someone else might report the case. This practice helps to improve surveillance case ascertainment. The potential for double reporting is preferred to non-reporting because surveillance researchers are able to prevent double-counting of reported cases through a process of de-duplication. By combining the response options of ‘always’ and ‘sometimes’, shows that the vast majority of the community paediatricians (above 75%) have some involvement in the assessment or care of children with ASD, ADHD, Tourette syndrome, intellectual disability and fetal alcohol syndrome. Between 50% and 75% have some involvement in the assessment or care of children with attachment disorder, eating disorder and anxiety including OCD. About one-third (32.3%) are involved in the assessment or care of children with depression, and a small proportion (8.2%) in the care of those with psychosis and bipolar disorder.
The main objectives of this study were to ascertain the extent to which community paediatricians are involved in the care of children with mental health conditions, the types of mental health conditions they are involved in providing care for, reasons for their involvement, and the implications for case ascertainment for surveillance studies by CAPSS and BPSU. The survey found high levels of community paediatricians’ involvement in the assessment and treatment of neurodevelopmental conditions, more so for ASD. The study also found a significant level of presentation of CYP with emotional difficulties to community paediatric services. The high level of community paediatricians’ involvement in the assessment and treatment of CYP with neurodevelopmental conditions like ASD and ADHD is consistent with their expertise, workforce recommendations and established practice in the UK. The community paediatricians appeared positive about this area of work. There was no free text comment to suggest that any of the paediatricians had concerns about supporting CYP with neurodevelopmental difficulties. Concerns were expressed only when the CYP developed comorbid emotional difficulties which required CAMHS support but this was difficult to access. This concern is consistent with the view that CYP with neurodevelopmental conditions like ADHD and ASD are best managed holistically within an integrated service model involving both paediatrics and CAMHS. The surveillance implication of the high presentation of neurodevelopmental conditions to community paediatrics supports the current practice of joint BPSU and CAPSS surveillance for such conditions. This practice is exemplified by a recent joint study on ADHD transition which showed that 64% of the cases were reported by paediatricians, while 36% were reported by child and adolescent psychiatrists, with no cases dually reported through both BPSU and CAPSS. The loss of case ascertainment that would have resulted had the study been run through a single system is self-evident: a CAPSS-only study would have missed the 64% of cases reported only by paediatricians, and a BPSU-only study the 36% reported only by psychiatrists. The community paediatricians reported a significant level of presentation of CYP with emotional difficulties to their services (eg, 29% for anxiety and OCD). The primary reason for this situation is difficulty with access to CAMHS. The survey found that, unlike neurodevelopmental conditions, the community paediatricians expressed concerns that their involvement in the care of CYP with emotional difficulties is beyond their training and expertise. Many suggested that they had to offer help, because the affected CYP would otherwise have no support. The service implications of these concerns are discussed later. However, for purposes of surveillance studies, the significant presentation of CYP with emotional difficulties to community paediatric services could have implications for case ascertainment. The surveillance implication is even more significant if account is taken of the high proportion of community paediatricians who were ‘aware’ of CYP with emotional difficulties in their service. The latter point is based on the fact that a consultant being ‘aware of’ or ‘knowing of’ a case is sufficient for them to make a surveillance report on the case. The surveillance implication of the significant presentation of CYP with emotional difficulties to community paediatric services requires some nuancing. Joint BPSU and CAPSS surveillance is twice as expensive. It also taxes the goodwill of consultants in both specialties who make voluntary monthly reports about having seen or not seen cases.
Maintaining the goodwill of consultants is a crucial factor in sustaining surveillance platforms. This requires careful management of the number of studies in order to prevent excessive reporting burden on consultants. These points indicate that a strong justification should be required to support joint BPSU and CAPSS studies in order to optimally balance the trade-offs between case ascertainment, cost and increased reporting burden on consultants. We recommend that, in relation to emotional difficulties, the justification should depend on the specific research question. For example, while OCD is an emotional difficulty, a surveillance study of OCD presentation in the context of PANDAS would require a joint BPSU and CAPSS strategy. The separate executive committees of BPSU and CAPSS can advise researchers early in the planning of a study regarding whether the research question is likely to require single or joint surveillance. BPSU surveillance covers all consultant paediatricians in the UK and Ireland. However, for some surveillance studies of child and adolescent mental health difficulties where the interface is more likely with community paediatricians (rather than the general body of paediatricians), a case could be made to limit the cost and reporting burden by running a joint CAPSS and BACCH study (instead of a joint CAPSS and BPSU study). However, there is currently no surveillance infrastructure for BACCH members alone. The very low levels of presentation of CYP with psychosis and bipolar disorder to community paediatrics support the current practice of CAPSS-only surveillance for such conditions. The additional expense and reporting burden of joint surveillance is unlikely to be justifiable for such cases. However, there could still be circumstances whereby a surveillance study of patients with psychosis may require a joint BPSU and CAPSS strategy. A potential example would be a study of the incidence of neuroleptic malignant syndrome in CYP treated with antipsychotic medications. The concern about access to CAMHS, which was raised by almost a quarter of the paediatricians, requires some brief exploration even though it is less central to the study objective covered in this paper (which is focused on the implications for surveillance case ascertainment). This challenge with CAMHS access appeared to be pervasive and generated considerable frustration among the community paediatricians. Some of the paediatricians indicated that they were reluctantly over-reaching their expertise to help CYP with mental health difficulties that would normally be seen by CAMHS. Many paediatricians attributed the problem with CAMHS access to underfunding of CAMHS, leading to short staffing, long waiting times and a raised referral threshold that focuses on CYP with the most severe mental illnesses. Therefore, several community paediatricians suggested that the main solution is to expand CAMHS capacity. Some of the paediatricians cautioned against the type of token measures that occurred in their own catchment areas, in which commissioners rebranded CAMHS without extra resources, resulting in no improvement in access. We hope that the National Health Service Long Term Plan ( https://www.longtermplan.nhs.uk/ ), which commits extra resources to CAMHS as well as closer integration of services, will bring about genuine and sustained improvement in access to CAMHS.
Strengths and limitations of the study One of the strengths of this study is its nationwide scope and presentation of a representative sample of CCH paediatricians’ workload and experience of working with CAMHS practitioners in the UK. There are, however, potential weaknesses of the study that require caution when interpreting the results. The main limitation of this paper relates to uncertainty about the representativeness of the survey sample. We surveyed members of BACCH because this group of paediatricians is more likely to interface with CAMHS. However, they constituted just over a quarter of the total UK paediatric consultant workforce of 3996 in 2015 ( https://www.rcpch.ac.uk/resources/paediatric-workforce-data-policy-briefing-2017 ). Although we believe that the 245 respondents provided good coverage of the 169 CCH units around the UK, concerns about confidentiality meant that we did not request data that could link respondents to CCH units. Thus, the absence of information on the regional spread of the respondents, as well as the age and gender distribution, means that there is some uncertainty about the degree to which the findings are generalisable.
This survey identified significant involvement of community paediatricians in the assessment and treatment of CYP with mental health conditions. The involvement is highest in relation to neurodevelopmental conditions, and this is in keeping with the expectations and expertise of community paediatricians. However, there is also significant involvement in the care of CYP with emotional difficulties, which is mainly due to lack of access to CAMHS. The implication of the findings for surveillance case ascertainment is that joint BPSU and CAPSS surveillance continues to be recommended for studies of neurodevelopmental conditions. For surveillance studies of emotional disorders, a nuanced decision about single or joint surveillance should be made based on the specific research question and the relative trade-offs between case ascertainment, cost and reporting burden. Single CAPSS studies remain appropriate for surveillance studies of psychosis and bipolar disorder. There is an urgent need to expand access to CAMHS. This would reduce the need for community paediatricians to over-reach their expertise to support CYP with mental health difficulties whose needs would be better met by CAMHS.
Multiplex Determination of K-Antigen and Colanic Acid Capsule Variants of | 4d6d835e-5172-4464-bbc0-49bfc59ddec0 | 11507822 | Microbiology[mh] | Cronobacter spp. (formerly Enterobacter sakazakii ) are opportunistic bacterial pathogens which can be isolated from a wide range of foods and environmental sources . Serious infections of Cronobacter are associated with neonates, particularly those with low birth weight (<1.5 kg) and <28 days in age. Such infections may result in necrotizing enterocolitis (NEC), septicemia, and meningitis with high fatality rates (40–80%) . Bowen and Braden (2006) reported that surviving neonatal cases of Cronobacter meningitis may have severe neurological damage. Infections also occur in adults, in particular immunocompromised patients . Adult infections are associated with bacteremia, wound infections and urosepsis. C. sakazakii and C. malonaticus are the species isolated from the majority of clinical samples in both age populations (neonatal and adult) . Several studies have reported that Cronobacter species have been isolated from different environments such as powdered infant formula (PIF) and milk powder production factories, including floors, roofs, tanker bays, drying towers, roller dryers, conveyors, and air filters of industrial units . Cronobacter species can persist in these environments because of their ability to survive spray drying, desiccation and osmotic stress . Caubilla-Barron and Forsythe (2007) reported that Cronobacter species can persist and survive for more than 2 years in PIF. The ingestion of contaminated PIF is the main route of infant infection, and this has led to the development of internationally approved detection methods for the food industry . Capsular polysaccharides of Gram-negative bacteria play a significant role in maintaining the structural integrity of the cell in hostile environmental conditions. The polysaccharide capsules are major bacterial virulence factors and environmental fitness traits . Due to its affinity for water, the capsule will contribute to the organism’s persistence under desiccated conditions in natural and food production environments . The O-antigen and K-antigen in Gram-negative bacteria consist of long polysaccharide units, which are covalently linked to lipid A in the outer membrane. This diversity has been the basis for differentiation methods of E. coli and Salmonella . E. coli produces more than 80 different capsular polysaccharide K-antigens, while there are over 2500 different Salmonella serotypes . Consequently, polysaccharide capsule diversity can be used as a taxonomic tool within the Enterobacteriales . The K-antigen gene cluster of E. coli consists of three genomic regions. Region 1 includes the kpsEDCS genes and Region 3 includes kpsTM ; together these encode the enzymes and transport proteins responsible for the initiation of chain elongation and translocation to the cell surface. The variable Region 2 genes encode the glycosyltransferases and other enzymes responsible for the biosynthesis of the K-antigen . Kaczmarek et al. (2014) reported that the K1 antigen is a key virulence determinant of E. coli strains and has been associated with meningitis, bacteremia and septicemia, particularly in neonatal cases. Neonatal meningitis Escherichia coli (NMEC) is a predominant Gram-negative bacterial pathogen associated with meningitis in babies. NMEC is also associated with strains possessing capsular polysaccharides .
A multiplex PCR assay targeting a capsular polysaccharide synthesis gene cluster of Klebsiella serotypes K1, K2 and K5 was evaluated using reference serotype strains and a panel of clinical isolates. The PCR assay was highly specific for serotypes associated with virulence in humans . Feizabadi et al. (2013) proposed a rapid and reliable PCR method for the identification of K. pneumoniae K1 and K2 serotypes. A genomic study of 11 Cronobacter strains by Joseph et al. (2012) noted that Cronobacter possessed a sequence-type variable capsular polysaccharide encoding region. Later studies by Ogrodzki and Forsythe (2015 and 2017) reported that it is homologous with the K-antigen of E. coli and is found in all Cronobacter species. The Cronobacter K-antigen is encoded in three regions, of which most of Region 1 ( kpsEDCS ) and all of Region 3 ( kpsTM ) were conserved across the Cronobacter genus. The glycosyltransferase genes in Region 2 varied in length and GC content. Additionally, the terminal sequence of kpsS (Region 1) differed in conjunction with the variation in Region 2. Since Region 2 encodes the glycosyltransferases, the two K-antigens (K1 and K2) lead to the production of two distinct exported polysaccharides. Furthermore, a comparison of kpsS (which encodes the capsular polysaccharide transport protein) revealed sequence variation in accordance with kps Region 2 . The chemical composition of the Cronobacter K-antigen is still unknown; however, K2 is linked to strains from neonatal meningitis cases. Another bacterial capsular polysaccharide is known as colanic acid (CA). This is associated with bacterial protection against desiccation, extreme temperatures and acidic environmental conditions . In Cronobacter species, the colanic acid-encoding gene cluster is located adjacent to the O-antigen region and separated from it by the galF gene . Furthermore, Ogrodzki and Forsythe (2015) reported that there are two variants of the colanic acid-encoding gene cluster. CA1 is composed of 21 genes, while CA2 lacks the galE gene (encoding UDP-N-acetyl glucosamine 4-epimerase). C. sakazakii and C. malonaticus isolates with capsular type [K2:CA2:Cell+] were associated with neonatal meningitis and necrotizing enterocolitis. Other capsular types were less associated with clinical infections . This study aimed to develop and apply a multiplex PCR assay targeting the Cronobacter capsular polysaccharide genes kpsS (K1 and K2) and galE (CA1 and CA2). This assay could subsequently be useful for the specific detection and the rapid, simple identification of K-antigen and colanic acid types, respectively.
2.1. Bacterial Strains Twenty-six strains of C. sakazakii were used in this study. These strains were from the culture collection of Cronobacter spp. of Nottingham Trent University (NTU). These strains had previously been isolated from various food and environmental sources. They were from 18 different sequence types and 4 serotypes O:1, O:2, O:3 and O:4; . 2.2. Genomic DNA Extraction Genomic DNA was extracted from 1.5 mL of culture grown overnight in TSB using the GenElute™ Bacterial Genomic DNA Kit (Sigma-Aldrich, London, UK), according to the manufacturer’s instructions. The purity and concentration of the extracted DNA were measured using a NanoDrop 2000 (Thermo Scientific, London, UK). 2.2.1. C. sakazakii K-Antigen Profiling The K-capsule encoding region in Cronobacter spp. is composed of three regions. The genomic study indicated that the variations between the K1 and K2 capsule types were attributed to Region 2 and to the kpsS gene (encoding the capsular polysaccharide transport protein) of Region 1 . Therefore, K1 and K2 primers were designed based on the capsular gene ( kpsS ) and were identified from the sequence information of C. sakazakii strain 658 ( kpsS1 ) and strain 6 ( kpsS2 ), respectively. Primers flanking K1 and K2 were designed using the Primer 3.0 software and were synthesized by Eurofins MWG Operon (London, UK). The primer names, their sequences and predicted product sizes are summarized in . 2.2.2. K-Antigen Gene Amplification The multiplex PCR was performed by mixing the two primer sets (K1 and K2) in a final volume of 50 µL containing the following components: 1× DreamTaq buffer; 2.5 mM MgCl2; 400 µM concentrations (each) of dATP, dCTP, dGTP, and dTTP; 0.06 to 0.10 µM primer; 1 U of DreamTaq DNA polymerase; and DNA template (200 ng). The following PCR conditions were used for amplification: an initial denaturation step at 95 °C for 5 min, followed by 30 cycles of 94 °C for 30 s, 59 °C for 30 s, and 72 °C for 1 min, with a final extension at 72 °C for 8 min. Five microliters of the PCR products were loaded onto a 1.5% agarose gel in 1× TAE buffer and run at 70 V for ~60 min. 2.2.3. C. sakazakii Colanic Acid Profiling Colanic acid variant primers were based on the galE gene sequence, encoding UDP-N-acetyl glucosamine 4-epimerase, of C. sakazakii strain 658. They were designed using the Primer 3.0 software and obtained from Eurofins MWG Operon (London, UK); . 2.2.4. C. sakazakii Colanic Acid Amplification The PCR conditions used for amplification were an initial denaturation step at 95 °C for 5 min, followed by 30 cycles of 94 °C for 30 s, 60 °C for 30 s, and 72 °C for 1 min, with a final extension at 72 °C for 8 min. Five microliters of the PCR products were loaded onto a 1.5% agarose gel in 1× TAE buffer and run at 75 V for ~60 min. For the genomic investigation, in silico analyses were carried out using the Cronobacter genomes openly accessible at the PubMLST Cronobacter database ( www.pubmlst.org/cronobacter/ ) (accessed on 29 January 2018). The Cronobacter genomes included in this study were those of C. sakazakii strains 1844, 1882, 1992, 1886, 1888, 1906, 1105, 377, 658, 1885, 2027, 1890, 1847, 1845, 1881, 1887, 1283, 1889, 1108 and 1908.
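To make the two amplification programmes easy to compare at a glance, the reaction and cycling parameters above can be laid out as plain data. The sketch below is illustrative only: the dictionary layout and field names are ours, not part of the original protocol or of any instrument software, and the assumption that the colanic acid reaction uses the same mix as the K-antigen multiplex is ours (the text specifies only its cycling conditions).

# Illustrative sketch: the cycling programmes and reagent mix from this section
# expressed as plain Python data, so the K-antigen and colanic acid protocols
# can be compared side by side. Values are taken from the text above.
K_ANTIGEN_PCR = {
    "primers": ["K1", "K2"],                 # multiplexed in one reaction
    "reaction_volume_ul": 50,
    "mix": {
        "DreamTaq_buffer": "1x",
        "MgCl2_mM": 2.5,
        "dNTP_uM_each": 400,
        "primer_uM": (0.06, 0.10),
        "DreamTaq_U": 1,
        "template_ng": 200,
    },
    "cycling": {
        "initial_denaturation": (95, 300),   # (temperature in °C, seconds)
        "cycles": 30,
        "denaturation": (94, 30),
        "annealing": (59, 30),
        "extension": (72, 60),
        "final_extension": (72, 480),
    },
}

# Per the text, the colanic acid (galE) reaction differs in annealing temperature;
# reusing the same mix here is an assumption for illustration only.
COLANIC_ACID_PCR = {**K_ANTIGEN_PCR, "primers": ["CA1"],
                    "cycling": {**K_ANTIGEN_PCR["cycling"], "annealing": (60, 30)}}

if __name__ == "__main__":
    for name, prog in (("K-antigen", K_ANTIGEN_PCR), ("Colanic acid", COLANIC_ACID_PCR)):
        temp, secs = prog["cycling"]["annealing"]
        print(f"{name}: anneal at {temp} °C for {secs} s, {prog['cycling']['cycles']} cycles")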
3.1. Sequence Type (ST) and Serotype Determination Twenty-six strains of C. sakazakii were used in this study. These strains were from the culture collection of Cronobacter spp. of Nottingham Trent University (NTU). The results of the sequence type (ST) and O-antigen serotyping of the 26 strains are included in . In this study, the strains were divided into four O-antigen serotypes: O:1, O:2, O:3 and O:4. Serotype O:2 was the most dominant serotype among the studied C. sakazakii strains and has been confirmed to be particularly predominant in clinical cases . 3.1.1. K-Antigen PCR amplification of K1 and K2 from C. sakazakii strains is shown in . The predicted PCR amplicon sizes are 248 bp and 120 bp for K1 and K2, respectively . The K1 capsular type was noted in C. sakazakii strains with STs ST1, ST8, ST20, ST23, ST64, ST198, ST263, ST264 and ST406, whereas K2 was primarily found in C. sakazakii sequence types ST4, ST9, ST12, ST13, ST136, ST233, ST245 and ST405. The PCR product sizes of the K-antigen types (K1 and K2) were compared with the genome investigation of the studied strains (Region 2), as shown in . The comparison showed agreement between the PCR amplification results and the genomic study of K-antigen Region 2. 3.1.2. Colanic Acid (CA) PCR amplification of CA1 for C. sakazakii strains is shown in . Colanic acid type 1 (CA1) was found in the majority of C. sakazakii sequence types, including ST1, ST8, ST9, ST20, ST245 and ST405, with a PCR product size of 429 bp. At the same time, CA2 was found in C. sakazakii sequence types ST4, ST12, ST13, ST23 and ST64. The latter strains showed no PCR products due to the absence of the galE gene . shows the agreement between the PCR determination and genome investigation for the CA type of the studied strains. 3.2. Comparison between PCR Amplification Results and the Genomic Investigation The K-antigen type 1 (K1) strains produced PCR bands of the expected size of 248 bp, whereas K-antigen type 2 (K2) strains produced PCR bands of the expected size of 120 bp . The PCR product sizes of the K-antigen types (K1 and K2) were compared with the genome investigation of the studied strains; see . The genomic information of these strains was obtained by open access from the PubMLST Cronobacter database ( www.pubmlst.org/cronobacter/ ) (accessed on 29 January 2018). The comparison showed complete agreement between the PCR amplification results and the genomic study of K-antigen Region 2. According to the genome investigation, two variants were found within the colanic acid (CA) cluster, CA1 and CA2. These were composed of 21 and 20 genes, respectively, differing in the presence of galE in CA1 (21 genes) and its absence in CA2 (20 genes). Therefore, primers were designed based on the galE gene sequence (present in CA1). The colanic acid type 1 (CA1) strains produced a PCR product of 429 bp, while CA2 strains showed no PCR products as a result of the absence of the galE gene . also shows agreement between the PCR determination and genome investigation for the CA type of the studied strains (presence/absence of the galE gene).
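A minimal sketch of how these gel read-outs translate into capsule calls may make the typing logic explicit: a ~248 bp kpsS product is read as K1, a ~120 bp product as K2, and the presence or absence of the ~429 bp galE product as CA1 or CA2. The helper functions, the size tolerance and the example strain records below are illustrative assumptions and are not taken from the study.

def k_type(band_bp, tol=20):
    # Assign the K-antigen type from the observed kpsS amplicon size (bp).
    if band_bp is None:
        return "untyped"
    if abs(band_bp - 248) <= tol:
        return "K1"
    if abs(band_bp - 120) <= tol:
        return "K2"
    return "untyped"

def ca_type(gale_band_bp, tol=20):
    # The paper reads absence of the galE amplicon as the CA2 variant; in practice a
    # failed reaction would need a positive control to rule out before calling CA2.
    return "CA1" if gale_band_bp is not None and abs(gale_band_bp - 429) <= tol else "CA2"

observed = {"strain_A": (248, 429), "strain_B": (120, None)}   # hypothetical read-outs, bp
for strain, (kps_band, gale_band) in observed.items():
    print(strain, k_type(kps_band), ca_type(gale_band))        # strain_A K1 CA1; strain_B K2 CA2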
Studies by Ogrodzki and Forsythe (2015 and 2017) indicate that the Cronobacter capsule-encoding region is homologous to the K-antigen of E. coli and is present in all Cronobacter species. As previously reported, most of K-antigen Region 1 ( kpsEDCS ) and all of Region 3 ( kpsTM ) in Cronobacter species were conserved across the genus; however, there are two variants of Region 2, which differ in their GC content and length . The K-antigen-specific CPS composition is currently unknown; however, it is of interest as it may be an important virulence or environmental fitness trait. Moreover, the K-antigen Region 1 ( kpsEDCS ) and Region 3 ( kpsMT ) genes were found in all Cronobacter spp., and the highly variable Region 2 genes were assigned to two homology groups, the K1 and K2 types. The variations between the K1 and K2 capsule types were attributed to the kpsS gene of Region 1 and the entire Region 2 (3 genes). Similarly, there are two variants of the colanic acid synthesis gene cluster, which is located adjacent to the O-antigen region and separated from it by the galF gene, differing in the presence/absence of galE . C. sakazakii isolates with capsular type [K2:CA2] are associated with neonatal meningitis and necrotizing enterocolitis, and other capsular types are less associated with clinical infections . The purpose of this study was to develop and validate a multiplex PCR assay targeting the capsular polysaccharide genes kpsS (K1 and K2) and galE (encoding UDP-N-acetyl glucosamine 4-epimerase; CA1 and CA2) for the specific detection and the rapid, simple identification of the K-antigen and colanic acid types, respectively. Twenty-six C. sakazakii strains, isolated from food and environmental sources and covering 18 different STs, were used in this study. Initial PCR-serotyping assays revealed that they belonged to four serotypes (O:1, O:2, O:3 and O:4) . Colanic acid type 1 (CA1) was found in C. sakazakii sequence types such as ST1, ST8, ST9, ST20, ST245 and ST405, while CA2 was primarily found in C. sakazakii sequence types ST4, ST12, ST13, ST23, ST42, ST64, ST136, ST198, ST233, ST263, ST264 and ST406; . These include clinically important sequence types with respect to neonatal infections, in particular ST4 and ST12. No cross-reactions were observed between the two specific primer pairs for capsular types K1 and K2 . Moreover, shows the agreement between the PCR determination and genome investigation for the CA type of the studied strains. Until recently, there were only 18 Cronobacter serotypes across the whole genus, with only 7 in C. sakazakii . In this study, the strains were divided into four O-antigen serotypes, O:1, O:2, O:3 and O:4. Serotype O:2 was the most dominant serotype among the studied C. sakazakii strains and has been confirmed to be particularly predominant in clinical cases . The same serotype can occur in different STs. For example, ST4 strains had three serotypes: O:2, O:3 and O:4. Furthermore, there is no clear correlation between serotype and K-antigen or colanic acid type. For example, the O:2 serotype covers three different capsule profiles, O:2:K1:CA1, O:2:K2:CA2 and O:2:K2:CA1. The most dominant capsule profile was K2:CA2, which includes strains with ST4, ST12 and ST13. These sequence types are strongly associated with severe neonatal infections of meningitis and NEC . However, strains belonging to other STs may also cause severe neonatal infections such as bacteremia and septicemia.
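The combined capsule profiles discussed above (e.g., O:2:K2:CA2) are simply the concatenation of the three independent typing results per strain; the short sketch below shows how such profiles could be assembled and tallied. The strain records are hypothetical illustrations (the K2:CA2 association with ST4, ST12 and ST13 follows the text, but the serotype entries and the record layout are ours), so the output is not study data.

from collections import Counter

typing_results = [
    {"st": "ST4",  "o": "O:2", "k": "K2", "ca": "CA2"},   # hypothetical example records
    {"st": "ST12", "o": "O:2", "k": "K2", "ca": "CA2"},
    {"st": "ST13", "o": "O:2", "k": "K2", "ca": "CA2"},
    {"st": "ST1",  "o": "O:2", "k": "K1", "ca": "CA1"},
]

# Build the O:K:CA profile string for each strain and count how often each profile occurs.
profiles = Counter(f'{r["o"]}:{r["k"]}:{r["ca"]}' for r in typing_results)
for profile, n in profiles.most_common():
    print(profile, n)   # e.g. O:2:K2:CA2 3, O:2:K1:CA1 1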
The capsule profile, sequence type (ST) and serotype analyses suggested that the Cronobacter strains isolated from food and environmental sources were highly diverse. This was particularly notable for isolates obtained from different foods.
PCR assays targeting the capsular polysaccharide genes, such as the K-antigen, are useful for pathogenicity and taxonomic studies. However, genomic prediction of K-antigen (K1 and K2) and colanic acid type (CA type) may not be feasible for laboratories with limited budgets. Instead, targeted methods based on PCR amplification may be more accessible to determine the K-antigen and colanic acid type in C. sakazakii . Thus, this multiplex assay may help in C. sakazakii capsular type identification in routine diagnoses.
Effect of photobiological regulation of green laser on orthodontic tooth retention in rats | 9b6fdb0d-86c1-4bb9-beb0-ed3732794fa8 | 11753330 | Dentistry[mh] | Orthodontic treatment aims for aesthetics, harmony, and stability, with stability being crucial . Following treatment, teeth can relapse due to remodeling of periodontal tissue and alveolar bone . Even after 7 months, periodontal ligament (PDL) fibers remain stretched, and the alveolar bone has not stabilized . Thus, a long retention period is needed for the remodeling of gingival and PDL fibers, as well as of newly formed bone. A 2-year retention is standard, and lifelong retainers may be necessary for complex cases . However, patient compliance with extended retention periods is often low, making optimal outcomes difficult to achieve. Accelerating periodontal tissue and bone regeneration during retention can enhance tooth stability and reduce relapse. Low-level infrared lasers can reduce relapse and enhance periodontal tissue and alveolar bone remodeling in rats . Green light from low-level lasers also accelerates new bone formation ; however, to the best of our knowledge, no research currently exists on its use in tooth retention. Visible light effectively regulates the differentiation of stem cells into osteoblasts (OBs) , which is crucial for photobiomodulation (PBM) in bone formation. PBM, previously known as low-level laser therapy (LLLT), uses light to promote tissue healing and regeneration . The light used in PBM can be either coherent (laser) or incoherent (light-emitting diodes, LEDs), with wavelengths ranging from 405 to 1,100 nm, an output power below 100 mW, and an energy density below 10 J/cm² . PBM provides a noninvasive method to stimulate localized cellular responses and promote tissue regeneration . Both in vitro and in vivo experiments have demonstrated that green light, particularly at shorter wavelengths, enhances bone regeneration by stimulating OB proliferation, differentiation, and maturation . In vivo, PBM accelerates bone healing by increasing bone density, regeneration, and mineralization . The green light spectrum of low-level lasers (LLLs) has been shown to accelerate new bone formation and enable rapid bone integration between dental implants and the alveolar bone, allowing for earlier weight-bearing. Therefore, a 540-nm green laser was used in this study to irradiate the periodontal tissue of rats during orthodontic tooth retention. Alveolar bone remodeling involves the coordinated activity of OBs and osteoclasts (OCs). Bone morphogenetic proteins (BMPs), signaling proteins crucial for osteogenic induction, play a vital role in bone tissue remodeling, with bone morphogenetic protein 2 (BMP-2) exhibiting the strongest osteogenic activity . BMP-2 is a potent bone-inducing factor that promotes the differentiation of mesenchymal stem cells into bone cells and serves as an indicator of bone formation capability . Based on this analysis, the aim of this study was to observe periodontal tissue reconstruction and BMP-2 expression under 540-nm green laser irradiation and to explore the impact of green laser photobiomodulation on orthodontic tooth retention in rats.
Establishment of rat orthodontic tooth movement model We selected 100 8-week-old male Sprague–Dawley (SD) rats obtained from the Shanxi Medical University Animal Center to develop an orthodontic tooth movement model. The breeding environment maintained a temperature range of 18–22 °C, a humidity level of 40%–70%, and good ventilation, and provided a balanced diet and clean drinking water. A 12-h light/dark cycle was implemented to simulate the natural light–dark cycle. A 0.2-mm diameter ligature wire (Tiantian Corp., Changsha, Hunan, China) was used to attach a nickel-titanium coil spring (Suhang Corp., Shenzhen, Guangdong, China), with one end ligated to the neck of the left upper first molar and the other end ligated to the incisors (Fig. a). A 50-g force was applied to move the first molar mesially. The active load force was applied once per week for 3 weeks. The experiments were conducted in accordance with the "Guiding Principles in the Care and Use of Animals" and were approved by our Hospital Laboratory Animal Welfare Ethics Committee. Establishment of rat orthodontic tooth retention model After 3 weeks, the intraoral device was removed, and a 0.25-mm diameter ligature wire was twisted to serve as a fixed retainer, maintaining the distance between the incisors and the left first molar in the maxilla (Fig. b). The rats were divided into two groups: Group A (control) and Group B (green laser treatment). Each group was further subdivided into five subgroups based on retention duration: 1-day, 4-day, 10-day, 13-day, and 21-day groups, with 10 rats per subgroup. In Group A, no additional measures were taken after the retention model was established, and the retention device was removed at the designated time points. In Group B, rats received green laser (Yuguang Co., Ltd., Shenzhen, Guangdong, China) treatment on days 0, 3, 6, 9, 12, 15, 18, and 21. The laser parameters were as follows: wavelength of 540 nm, output power of 1 W, spot diameter of 10 mm, spot area of approximately 0.79 cm², and dosage of 23 J/cm². Laser irradiation was applied to the buccal and palatal gingiva around the first molar for 18 s per site. After the retention devices were removed, the rats were allowed to relapse for 3 days before being sacrificed on days 4, 7, 13, 16, and 24. Maxillary samples from the left molar region were collected and fixed in 4% paraformaldehyde solution for 48 h. Measurement and calculation of recurrence rates Custom trays for the rats were created using a three-dimensional (3D) printer (Chinese Academy of Medical Sciences). A two-step method was employed to obtain maxillary silicone impressions of all rats before and after the first molar movement (Fig. a). Following an intraperitoneal injection of 3% sodium pentobarbital, the rats were positioned supine without limb fixation to keep the airway unobstructed. A heavy-body silicone rubber mixture was placed in the rats’ mouths to create the first impression. A spillway was then cut into the first impression, light-body silicone rubber was injected, and the impression was reseated in the mouth and removed once the material had solidified to obtain the second impression. The maxillary positive model was created using super-hard gypsum (Fig. b) and scanned with a 3Shape-D2000 scanner (3Shape Corp., Copenhagen, Denmark) to produce a 3D model of the poured positive model (Fig. c).
The distance between the mesiopalatal sulcus of the first and second molars was measured using 3Shape software (D0: before tooth movement, D1: after tooth movement, and D2: after relapse), with 0.001 mm accuracy (Fig. d). The amount of tooth movement (D = D1 - D0, in mm) and the amount of recurrence (Dn = D1 - D2, in mm) were calculated. The recurrence rate was calculated as Dn/D. Measurement of alveolar bone density The rat maxilla in the left molar area was collected, and the soft tissue was removed (Fig. a). The specimens were then fixed in 4% paraformaldehyde solution for 48 h. The maxillary samples were scanned using micro-computed tomography (micro-CT) (Skyscan-1276; Bruker Corp., Billerica, MA, USA) at a voltage of 70 kV, a current of 90 μA, a slice thickness of 5.0 μm, and an image matrix of 1,536 × 1,024 pixels, to assess the alveolar bone density on the pressure side of the left upper first molar (Fig. b). Preparation and staining of specimens Maxillary specimens were decalcified, embedded in paraffin, and sectioned longitudinally along the mesiodistal direction of the first molar. The tissues were stained with hematoxylin and eosin (HE) to observe OBs, OCs, and PDL fibers on the distal side of the first molar. Immunohistochemistry was performed using a BMP-2 monoclonal antibody to detect BMP-2 expression. Absorbance values were analyzed using ImageJ software (National Institutes of Health, Bethesda, MD, USA). Statistical analysis Data were analyzed using SPSS 26.0 software (IBM Corp., Armonk, NY, USA) and expressed as mean ± standard deviation (x̄ ± s). An independent t-test and the Kruskal–Wallis test were performed, with P < 0.05 indicating statistical significance. The independent t-test was used to compare the two groups at the same time point, while the Kruskal–Wallis test was used to evaluate the multiple retention-time subgroups within a group. The results of the Kruskal–Wallis test are represented by the H value: H1 for the control group and H2 for the green laser irradiation group.
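As a quick numerical sketch of the quantitative steps above, the following Python fragment checks the stated laser fluence (power times exposure time divided by spot area), computes the tooth-movement and recurrence-rate quantities, and runs the two statistical tests via SciPy. All distance and recurrence values in the example are made-up illustrative data, not measurements from the study, and the availability of NumPy and SciPy is assumed.

import numpy as np
from scipy.stats import ttest_ind, kruskal

# 1) Fluence check for the laser settings given above:
#    dose = power x time / spot area = 1 W x 18 s / 0.79 cm^2 ≈ 22.8 J/cm^2,
#    consistent with the stated 23 J/cm^2.
power_W, time_s, area_cm2 = 1.0, 18.0, 0.79
print(f"fluence = {power_W * time_s / area_cm2:.1f} J/cm^2")

# 2) Tooth movement, relapse and recurrence rate from the model distances:
#    D = D1 - D0, Dn = D1 - D2, recurrence rate = Dn / D.
def recurrence_rate(d0, d1, d2):
    moved = d1 - d0        # D, total mesial movement (mm)
    relapsed = d1 - d2     # Dn, amount of relapse (mm)
    return relapsed / moved

print(f"example recurrence rate: {recurrence_rate(2.10, 3.05, 2.70):.2f}")   # made-up distances

# 3) Group comparisons as described: independent t-test between the two groups at one
#    time point, Kruskal-Wallis across the retention-time subgroups within one group.
rng = np.random.default_rng(0)
group_a = rng.normal(0.60, 0.05, 10)    # hypothetical recurrence rates, n = 10 per subgroup
group_b = rng.normal(0.45, 0.05, 10)
t_stat, p_t = ttest_ind(group_a, group_b)
h_stat, p_h = kruskal(rng.normal(0.8, 0.05, 10),   # e.g. 1-day, 4-day and 10-day subgroups
                      rng.normal(0.6, 0.05, 10),
                      rng.normal(0.4, 0.05, 10))
print(f"t-test p = {p_t:.3g}; Kruskal-Wallis H = {h_stat:.2f}, p = {p_h:.3g}")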
Recurrence of orthodontic teeth in rats After the retention device was removed, the first molar relapsed. Within the same group, the recurrence rate decreased over time across the 1-, 4-, 10-, 13-, and 21-day retention intervals (H1 = 45.926, P = 0.001; H2 = 47.059, P = 0.001). This change can be divided into two stages: a higher recurrence rate during days 1–4 and a lower recurrence rate from days 4–21. These results indicate that longer retention periods benefit tooth stability. At the same time points, the recurrence rates in Groups A and B differed. No significant difference was observed between Groups A and B after 1 day ( P = 0.647); however, Group B had significantly lower recurrence rates from days 4–21 ( P < 0.01) (Table ). This suggests that during the retention stage, short-term retention (1 day) and green laser irradiation have a relatively small impact on the recurrence rate of the first molar. As the retention time increases, the effect of green laser therapy gradually improves, effectively reducing the recurrence rate of the first molar. The trend of recurrence rates was similar to the results obtained by other members of our group using low-level infrared laser irradiation . However, at 13 and 21 days of retention, the recurrence rates were lower in the green laser irradiation group than in the infrared laser group. Measurement of alveolar bone density Micro-CT analysis showed that alveolar bone density gradually increased over time in both groups, peaking at day 21 (H1 = 44.610, P = 0.001; H2 = 46.640, P = 0.001). Alveolar bone densities in Groups A and B were similar after 1 day ( P = 0.202). However, from day 4 onward, Group B consistently exhibited higher bone density than Group A ( P < 0.01), demonstrating the positive impact of the green laser on bone regeneration (Table ). Currently, there are few reports on the effect of green laser on orthodontic tooth retention in rats, particularly regarding changes in alveolar bone density. This study confirmed the role of green laser in promoting the formation of new alveolar bone, which is consistent with the conclusion that green laser therapy promotes the repair of bone defects . Results of HE staining HE staining revealed that, over time, the PDL fibers in the same group became more regular and dense, while the number of bone resorption lacunae and OCs gradually decreased (Fig. ). At the same time point, HE staining showed no significant differences between Groups A and B after 1 day. The arrangement of fibroblasts and PDL fibers was relatively disordered, with occasional hyaline degeneration observed. OC-like cells were present in the alveolar bone, but OBs were relatively rare. After 4 days, Group A showed almost no change. Meanwhile, in Group B, the fibroblasts were relatively well organized, the PDL fibers were arranged more regularly and densely, and the number of OC-like cells decreased, while OBs increased. After 10 days, the PDL fibers in Group B were arranged more regularly, with denser bone trabeculae and significant new bone deposition. A small number of OCs and signs of bone resorption were observed. After 13 and 21 days, the PDL became increasingly organized. In Group B, OC-like cells were rarely observed, and more OBs were found in the new bone matrix. Results of immunohistochemical staining Immunohistochemical staining revealed that BMP-2 expression and BMP-2-positive cell counts increased in both groups over time (Fig. ) (H1 = 47.106, P = 0.001; H2 = 47.102, P = 0.001).
BMP-2-positive cells were differentially expressed in each group during the recurrence stage. More BMP-2-positive cells were observed after 1 day in Group B than in Group A, although the difference was not significant ( P = 0.709). BMP-2-positive cells in both groups gradually increased after 4 days, peaking at 21 days. Group B showed significantly higher BMP-2-positive cell counts ( P < 0.01) (Table ). These results demonstrate that green laser irradiation promotes BMP-2 expression but has no significant effect on periodontal remodeling after 1 day. As the retention time increased, the effect of green laser therapy was gradually enhanced, leading to increased BMP-2 expression, which facilitated periodontal tissue remodeling.
After orthodontic treatment, relapse tends to occur, and the recurrence rate remains very high. Studies have shown that rat molars relapse rapidly once the force device is removed, with a recurrence rate of 62.5%–73% after 1 day; relapse then slows, and the rate stabilizes at 86.1%–93% after 21 days. In clinical practice, some patients continue to wear retainers for three years after orthodontic treatment, yet the recurrence rate still exceeds 19%. Orthodontic relapse (OR) involves several factors, with the reconstruction of the periodontium being the primary cause. Feng et al. demonstrated that the PDL has the potential to revert to its original state after the removal of mechanical forces, highlighting the importance of PDL collagen recovery in the early stages of relapse. The findings of Franzen et al. support the hypothesis that orthodontic tooth movement and recurrence undergo similar processes, suggesting that alveolar bone remodeling is an important factor in orthodontic tooth recurrence. Therefore, understanding the reconstruction of the periodontium and alveolar bone after orthodontic tooth recurrence is crucial for inhibiting relapse and maintaining long-term stability.
BMP-2 is the most active growth factor in the BMP family and is critical in driving osteogenic differentiation. BMP-2 promotes the proliferation, differentiation, and maturation of mesenchymal stem cells into OB cells, thereby stimulating the growth, development, and reconstruction of bone and cartilage. Osteogenesis induced by BMP-2 has been studied in many fields. During orthodontic treatment, a dynamic balance between periodontal tissue and alveolar bone remodeling helps maintain the teeth in their new positions. Yang et al. found a significant increase in BMP-2 expression in new bone formation in an orthodontic rat model. Some researchers have accelerated periodontal tissue remodeling and new bone formation during the orthodontic tooth retention stage in rats by injecting drugs and have observed an increase in BMP-2 expression within the periodontal tissue. These results suggest that BMP-2 can promote new bone formation during orthodontic tooth movement and retention. BMP-2 can interact with broader signaling pathways, such as the Wnt/β-catenin pathway and the TGF-β pathway, which directly or indirectly affect the expression of osteogenic genes, including Runt-related transcription factor 2 (Runx2) and osteocalcin (OCN), thereby modulating the production of BMP-2. Green laser irradiation can influence the expression of the osteogenic genes Runx2 and OCN. In this study, we evaluated the expression of BMP-2 in periodontal tissues irradiated by a green laser during the orthodontic tooth retention stage in rats.
Photobiomodulation (PBM), also referred to as low-level laser therapy (LLLT), influences mesenchymal stem cell proliferation, differentiation, and maturation. Low-level lasers (LLLs) are considered biologically safe, as their direct irradiation of biological tissues promotes tissue regeneration and accelerates orthodontic tooth movement without causing irreversible damage. Kim et al. reported that LLLT combined with fixed retainers enhances collagen synthesis in the periodontal tissue, shortens the retention period, and reduces the recurrence rate. Most experiments on LLL interventions in orthodontic tooth retention use infrared and near-infrared lasers; however, limited research has been conducted on green lasers in this area.
Green light affects the proliferation, differentiation, and maturation of human adipose-derived stem cells (hADSCs), bone marrow mesenchymal stem cells (BMSCs), amniotic fluid-derived stem cells (AFSCs), OB-like cells (SaOS-2), and OBs. Specifically, green light is superior to infrared or near-infrared light in promoting bone formation, and specific green light wavelengths promote the osteogenic differentiation of hADSCs. Compared to red (660 nm) and near-infrared (810 nm) light, green light (540 nm) is more effective in promoting the differentiation of hADSCs into OBs while inhibiting OB proliferation. Conversely, red (660 nm) and near-infrared (810 nm) light stimulate OB proliferation but do not significantly affect differentiation. The mechanism by which green light promotes osteogenesis involves the activation of the transient receptor potential cation channel subfamily V member 1 (TRPV1) by 540 nm green light, which more effectively regulates calcium concentrations in stem cells, thereby affecting the expression of osteogenic genes, including RUNX2, OCN, and OSX, increasing the mRNA expression levels of osteogenesis-related factors, and promoting osteogenesis. A green laser (532 nm) exhibited a stronger activating effect on exogenously expressed TRPV1 channels in Xenopus oocytes than an infrared laser (637 nm), which promoted the differentiation of stem cells into OBs, whereas the infrared laser promoted cell proliferation by increasing mitochondrial activity and ATP production within cells. It was therefore concluded that the green laser has a stronger effect in promoting OB differentiation, while the infrared laser has a stronger effect in promoting OB proliferation. However, other researchers concluded that green LED irradiation promotes both the proliferation and differentiation of SaOS-2 cells. The biological regulation exerted by green light similarly promotes OB proliferation, with parameters set at a 40 Hz pulse, 560–650 nm, and 0.4 mW/cm². Additionally, green light irradiation regulates osteogenesis in BMSCs by modulating the expression levels of secreted phosphoprotein 1 (SPP1), bone gamma-carboxyglutamate protein (BGLAP), and RUNX2. The effect of visible light irradiation on the expression of pluripotency genes, such as Oct-4, Sox2, and Nanog, in AFSCs has been demonstrated using LEDs of different wavelengths (at irradiances of 0–2 mW/cm²), including 525 nm green light, which can stimulate OB differentiation. In in vivo experiments, Jiang et al. demonstrated that 540 nm green light promotes the repair of femoral defects in rats. Additionally, the green light spectrum from an LLL accelerates new bone formation and enables rapid bone integration between implants and alveolar bone, allowing implants to be used for early weight-bearing; this light can also be used to reduce tooth hypersensitivity. In clinical orthodontic treatment, there have been reports of using a low-energy infrared laser (35.7 J/cm²) to treat rotated incisors, which confirmed that it alleviated their relapse. By adapting the irradiation methods and parameters established for infrared lasers, green lasers could be applied in orthodontic clinics in the near future to prevent relapse and shorten the retention period, something that traditional retainers alone cannot achieve, as they do not accelerate periodontal tissue remodeling.
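Because the irradiation parameters cited above are reported in mixed units (irradiance in mW/cm², fluence in J/cm², exposure time in seconds), a short helper makes the relationship explicit. This is a generic dosimetry sketch with illustrative numbers only; it does not reproduce the protocol or parameters of any particular cited study.

```python
def fluence_j_per_cm2(irradiance_mw_per_cm2: float, exposure_s: float) -> float:
    """Fluence (J/cm^2) = irradiance (W/cm^2) x exposure time (s)."""
    return (irradiance_mw_per_cm2 / 1000.0) * exposure_s

def exposure_needed_s(target_fluence_j_per_cm2: float, irradiance_mw_per_cm2: float) -> float:
    """Exposure time (s) required to reach a target fluence at a given irradiance."""
    return target_fluence_j_per_cm2 / (irradiance_mw_per_cm2 / 1000.0)

# Example: a 100 mW/cm^2 beam applied for 60 s delivers 6 J/cm^2.
print(fluence_j_per_cm2(100.0, 60.0))   # 6.0
# Example: reaching a 35.7 J/cm^2 dose at 100 mW/cm^2 would take 357 s.
print(exposure_needed_s(35.7, 100.0))   # 357.0
```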
In our experiment, a 540 nm green laser was used to promote OB proliferation and new bone formation, stimulate BMP-2 expression, accelerate the reconstruction of the periodontium and alveolar bone, and reduce the recurrence of orthodontic teeth in rats. The recurrence rates and absorbance values of BMP-2 in the orthodontic teeth of rats were similar to the results obtained by a previous study and by other studies from our group using low-level infrared laser therapy (808 nm). However, compared with the preliminary results from our research group, the recurrence rates were lower with green laser irradiation than with the infrared laser (808 nm) at 13 and 21 days of retention, and the absorbance values of BMP-2 showed the opposite pattern, being higher with the green laser. This study also confirmed the role of the green laser in increasing alveolar bone density, in line with the conclusion that the green laser promotes the repair of bone defects. Based on the findings of our group, after 13 retention days in rats, the 540 nm green laser demonstrated a stronger effect in reducing relapse rates and promoting alveolar bone remodeling compared with the 808 nm infrared laser. However, this study did not conduct in vitro cell experiments, nor did it analyze different wavelengths or BMP-2-related signaling pathways. We will continue with in vitro experiments to investigate whether the differentiation effect of green lasers of different wavelengths on cells is stronger than that of infrared lasers and to analyze the BMP-2-related signaling pathways.
Green laser therapy effectively reduces orthodontic tooth recurrence and improves the stability of teeth by promoting periodontal tissue and alveolar bone remodeling. It enhances BMP-2 expression and induces new bone formation. These conclusions will aid in subsequent in vitro orthodontic tooth retention model studies on the differential effects of green lasers with different wavelengths on OBs and their impact on BMP-2-related signaling pathways.
Childhood Obesity: Position Statement of Polish Society of Pediatrics, Polish Society for Pediatric Obesity, Polish Society of Pediatric Endocrinology and Diabetes, the College of Family Physicians in Poland and Polish Association for Study on Obesity
Pediatric obesity is not the problem of a single nation; it is one of the most important public health problems worldwide. Although healthy eating patterns and regular physical activity (PA) help people achieve and maintain a healthy weight starting at an early age and continuing throughout life, every nation has unique cultural, economic, and health-care system conditions that make it difficult to implement detailed universal guidelines. Therefore, there is a need to publish local guidelines that are in concordance with international, universal recommendations. This is the first position statement of the Polish Society of Pediatrics, Polish Society for Pediatric Obesity, Polish Society of Pediatric Endocrinology and Diabetes, and Polish Association for the Study on Obesity. The Expert Panel's goal was to develop comprehensive evidence-based guidelines addressing the prevention, diagnosis, and treatment of obesity and its complications in children and adolescents. The aim of the work was to assist pediatric care providers (pediatricians, family doctors, nurses, physiotherapists, registered dietitians, and psychologists) in both the prevention of obesity and the identification and management of the specific risks associated with it, from infancy to adulthood. Searching was conducted using the PubMed/MEDLINE, Cochrane Library, Science Direct, MEDLINE, and EBSCO databases, from January 2022 to June 2022, for English-language meta-analyses, systematic reviews, randomized clinical trials, and observational studies from all over the world. The websites of scientific organizations, such as the WHO, were also searched. Five main topics were defined: (1) definition, causes, and consequences of obesity; (2) treatment of obesity; (3) obesity prevention; (4) the role of primary care in the prevention of obesity; (5) recommendations for general practitioners, parents, teachers, and regional authorities.
3.1. Obesity—Definition
Obesity is a chronic recurrent disease related to excessive fat tissue accumulation that presents a risk to health. The diagnosis of overweight, obesity, and severe obesity is usually based on the measurement of height and weight and the calculation of the weight-to-length ratio in children below the age of 5 years and the body mass index (BMI) in older children. These indexes are assessed using child growth standards for age and sex. Their advantages are simplicity, low cost, and universality of measurement and assessment. However, it should be noted that they are not perfect in assessing the amount and distribution of the fat tissue accumulation that causes the development of obesity complications. In addition, they should be used with caution in particular situations, for example, in athletes with high muscle mass or in children with significant posture defects (scoliosis) that reduce measured height.
Diagnostic Tools and Data Interpretation
According to the World Health Organization (WHO), in children under the age of 5 years, overweight should be diagnosed if the weight-to-length ratio is greater than 2SD above the median of the child growth standard and obesity when this ratio is greater than 3SD above the median.
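For the youngest children, this WHO rule reduces to a simple threshold check on the weight-for-length z-score. The minimal sketch below assumes the z-score has already been read from the WHO child growth standards; the function name and labels are illustrative and are not part of the position statement.

```python
def classify_under_5(weight_for_length_z: float) -> str:
    """Classify weight status in children under 5 years from the WHO
    weight-for-length z-score (>2 SD overweight, >3 SD obesity)."""
    if weight_for_length_z > 3.0:
        return "obesity"
    if weight_for_length_z > 2.0:
        return "overweight"
    return "no excess weight"

print(classify_under_5(2.4))  # overweight
print(classify_under_5(3.2))  # obesity
```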
In children aged 3–18 years, Polish BMI percentile charts should be used, where overweight is defined as a BMI above the 85th percentile (>1SD) and obesity as a BMI above the 97th percentile (>2SD). WHO standards for children aged 5–19 years can also be used, with the overweight and obesity definitions in accordance with the Polish charts. It is also possible to use the older BMI percentile charts for Polish children, published in 1999 by Palczewska and Niedzwiecka, where overweight is defined as a BMI above the 90th percentile and obesity above the 97th percentile; however, using them, we risk underestimating the prevalence of overweight compared with the WHO charts. Due to the high risk of developing metabolic and cardiovascular complications, severe obesity should be specified. There are several definitions of severe obesity in children; we propose to use the one in which severe obesity is diagnosed in children older than 5 years if the BMI exceeds 3SD (the 99.9th centile). An index of visceral fat tissue accumulation (abdominal obesity) related to metabolic complications that can be used in children is waist circumference. It is measured at the level of the midpoint between the lowest rib and the iliac crest. For Polish children, centile charts of waist circumference for age and sex were developed within the OLA/OLAF project. Up to the age of 16 years, a waist circumference exceeding the 90th percentile for age and sex defines abdominal obesity and is associated with increased cardiometabolic risk. In older adolescents, the adult cut-off values for abdominal obesity should be used (94 cm for males and 80 cm for females).
3.2. Specific Causes of Obesity
3.2.1. 'Simple' Obesity
The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended. The weight status of children is closely associated with lifestyle behaviors such as physical activity, sedentary behavior, screen time, sleep, and diet. Over 90% of obesity cases are idiopathic and less than 10% are associated with genetic and hormonal causes.
Unhealthy Diet
Poor eating habits, including inadequate intake of vegetables, fruit, and milk, and eating too many high-calorie snacks, play a major role in the development of childhood obesity. Body weight is regulated by various physiological mechanisms that maintain the balance between energy intake and energy expenditure. Without this regulation, a sustained positive energy balance of only 500 kJ (120 kcal) per day (approximately one serving of sugar-sweetened soft drink) would produce an increase in body mass of about 50 kg over 10 years. Apart from excess caloric intake, other factors very important for the development of childhood obesity are an incorrect or insufficient number of meals, skipping breakfast, drinking sugar-sweetened beverages, eating out, eating without hunger, and eating in front of the TV screen. In research conducted by Toschke et al. on 477 children aged 5–7, the prevalence of obesity decreased as the number of meals consumed during the day increased. In the group of children who ate 3 or fewer meals per day, 15% of children were overweight and 4.2% were obese; among children who ate 5 or more meals per day, the prevalence of overweight and obesity was 8.1% and 1.7%, respectively. People who regularly skipped breakfast had a 4.5 times higher risk of obesity than those who regularly ate breakfast.
Sedentary Lifestyle
Research conducted in 49 countries in 2018 shows that 80% of Polish children lead a sedentary lifestyle.
Polish children took the penultimate place among their European peers. Children and adolescents spend between 246 and 387 min a day sitting. European children spend up to 2.7 h a day watching TV. Global trends, including excessive screen time, are creating a generation of 'inactive children.' During the pandemic, the percentage of children meeting the PA guidelines fell even further, while the percentage of children spending ≥2 h a day in front of a screen increased from 66% to 88%. Studies have shown that inactivity and sitting for more than four hours a day significantly increase the risk of cardiovascular disease, diabetes, and obesity, reduce sleep time, and also worsen prosocial and other behaviors. The latest reports describe a sedentary lifestyle and video games as the "new thrombophilia cocktail" in adolescents. Weight gain is caused by more time spent sitting, but also by a greater consumption of snacks and sweets. Therefore, attention should be paid to activities that aim to modify a sedentary lifestyle both at school and at home. Just three 5-min walks during the day can reverse the changes in the peripheral arteries of the legs caused by prolonged sitting. A 2017 study found that climbing stairs, considered high-intensity PA, burns more calories per minute than running. Introducing active video games to increase daily energy expenditure in obese and sedentary children is not a substitute for sports activities but may contribute to increasing energy expenditure beyond the threshold of sedentary activity. Involving children in everyday activities, such as cleaning up after a meal, vacuuming, walking the dog, and taking out the garbage, reduces the time spent sitting; commercial breaks while watching TV may be used for this purpose. A desk with an adjustable tabletop height or a seat in the form of a fitness ball will also encourage "active sitting". Balls provide better concentration in learning than a short period of intense PA or a lack of PA while studying. Reducing school sitting time and using active breaks during prolonged sitting resulted in a significant improvement in the apoB/apoA-1 ratio, with average effect sizes for TC, HDL-C, and the TC to HDL-C ratio; the ability to concentrate is also improved. Measuring the number of steps taken and using health apps on a phone are effective ways to increase a child's PA and thus support weight loss. Most studies use screen time as a proxy for total sedentary time, but media use does not represent all sedentary time. Many interventions to reduce sitting time have focused on increasing PA, yet it has been shown that active children and athletes, compensating for their high PA, spend quite a lot of time resting. It is therefore important to evaluate sedentary time in children correctly. Sedentary behaviors in children with excessive weight should be reduced to a maximum of 2 h per day.
Sleep Restrictions
Sleep restriction in children and adolescents appears to be associated with an increased risk of weight gain, visceral obesity, and increased body fat mass, which may persist or manifest several years later. Increasing PA to at least 60 min per day promotes sleep hygiene and reduces the risk of developing overweight or obesity. Excessive use of computer screens, tablets, and smartphones, especially in the evening and at night, may have a disruptive effect on sleep patterns, leading to a greater desire to eat at night and to snack during the day.
Psychological Mechanisms
The psychological mechanisms behind the onset and maintenance of obesity are the object of inquiry in scientific studies by psychologists with different theoretical backgrounds. Excessive eating, the compulsive consumption of food, and affected somatic functioning (excessive body weight) are often signs of difficulties in a person's psychological functioning. Obesity can be significant in terms of the mother–child relationship and other relationships in the family. A child's obesity can play a role in experiencing emotions and in social relationships with peers and adults. Additionally, some recent research points to a role of chronic stress and alterations in glucocorticoid secretion and action in the development of overweight and obesity. Stress may play a major role in the development and maintenance of excessive body weight in individuals who have increased glucocorticoid exposure or sensitivity due to increased long-term cortisol levels.
Binge Eating Disorder (BED)
Most of the excess eating that leads to obesity is due not to physical hunger but to psychological causes. Certain cognitive schemas, therefore, trigger emotions and behavior towards food. An important role in excessive eating is also played by ineffective mechanisms of emotional regulation related to the predominance of arousal processes over inhibition processes. This results in a particular style of coping with emotional tension, a reduced ability to defer gratification, and impulsiveness. Binge eating disorder is characterized by the occurrence of recurrent, uncontrolled binge eating episodes, defined as eating significantly more food at a given time than most people would under similar circumstances and in a similar time. The American psychiatric classification, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), distinguishes BED as an independent disease entity, code 307.51 (F50.8). BED is now recognized (in DSM-5) as a separate type of eating disorder, in addition to eating disorders such as bulimia nervosa (BN) and anorexia nervosa (AN). According to various data, this problem affects about 2–5% of the population and more often affects women. This percentage increases significantly in obese people, ranging from 30% to even 36–42%, and 13–27% of obese individuals seeking treatment have an eating disorder. To diagnose BED, ≥3 of the following indicators of impaired control over binge eating episodes must be present: eating until an unpleasant feeling of fullness appears; eating large amounts of food when not physically hungry; eating more rapidly than usual; eating alone because of embarrassment; and feelings of disgust, guilt, or depression after an episode of binge eating. Additionally, to diagnose BED, binge eating episodes must occur at least once a week for at least 3 months.
3.2.2. Monogenic Obesity
Monogenic obesity should be considered in children with early onset of weight gain (<2 years of age) and concomitant hyperphagia. Causes of secondary obesity include genetic (monogenic, syndromic), endocrine, iatrogenic, and hypothalamic causes. Suspicion of secondary obesity should be based on anamnesis (the patient's and family history) and physical examination with anthropometric evaluation, followed by additional diagnostics (differential diagnosis, hormonal, genetic, and imaging assessment).
The clinical features suggesting a genetic cause of obesity are: (1) a history of consanguinity in the family; (2) intellectual impairment; (3) dysmorphic features; (4) organ- or system-specific abnormalities; (5) severe obesity of early onset; (6) hyperphagia and food-seeking behaviors; (7) other specific features/characteristic phenotypes. The confirmation of the diagnosis should be made on the basis of genetic testing. Genetic obesity can be caused by a mutation in a single gene (monogenic), inherited recessively. It disrupts the regulatory system of satiety and hunger as well as energy expenditure. It is a rare condition and occurs in 3–10% of children with severe obesity. The most common gene mutations related to monogenic obesity are listed in the table. Personalized treatment is available for some mutations. Patients with leptin deficiency and biologically inactive leptin can be treated with recombinant human leptin (metreleptin). The melanocortin 4 receptor (MC4R) agonist setmelanotide is now approved for treatment in patients with proopiomelanocortin, leptin receptor, and proprotein convertase subtilisin/kexin type 1 (PCSK1) deficiencies. It is also known that patients with some mutations can be successfully treated with well-known drugs; for example, glucagon-like peptide 1 (GLP-1) agonists are effective in weight reduction in patients with MC4R mutations, and obesity related to kinase suppressor of Ras 2 mutations is well treated with metformin. Identification of a monogenic background is also important when qualifying a patient for bariatric surgery.
3.2.3. Syndromic Obesity
Syndromic obesity is usually related to dysmorphic features, mental retardation, and organ- or system-specific abnormalities. Syndromic obesity can be caused by a single gene mutation or by a change in a larger chromosomal region that involves several or many genes. In addition to obesity, it is usually associated with dysmorphic features and abnormalities characteristic of the syndrome. It is estimated that obesity can be a feature of almost 100 syndromes. The most common are Prader–Willi syndrome and Bardet–Biedl syndrome. Prader–Willi syndrome is the most common form of syndromic obesity (1:15,000–25,000 births). It is caused by inactivation of the 15q11-13 region of the paternal chromosome. The characteristic features of this syndrome are: (1) severe neonatal hypotonia; (2) feeding problems and poor weight gain in the first year of life; (3) hyperphagia and obesity appearing at about 4–8 years; (4) characteristic dysmorphic features (small hands and feet, almond-shaped eyes, prominent nasal bridge, downturned lips, tall, narrow forehead); (5) hormonal deficiencies (growth hormone deficiency, hypogonadism, hypothyroidism); (6) intellectual impairment, speech difficulties, and behavioral disturbances. Genetic confirmation of the syndrome should be made as soon as possible in all neonates with hypotonia and in all older children with a characteristic phenotype. Implementation of recombinant growth hormone (rGH) treatment is possible in all children with Prader–Willi syndrome with a BMI below the 97th percentile. Therapy with rGH improves body composition, increasing lean body mass and decreasing the visceral fat depot. Bardet–Biedl syndrome is a ciliopathy caused by an autosomal recessive mutation in one of the 24 genes related to the function of the BBSome, the protein complex involved in the function of the cilia.
In addition to obesity, Bardet–Biedl syndrome is characterized by polydactyly, syndactyly, ataxia, hypertonia, speech difficulties, retinal dystrophy, intellectual impairment, renal dysfunction, and hypogonadism. Less common syndromes associated with the development of obesity include, for example, Alstrom syndrome, Borjeson–Forssman–Lehmann syndrome, Carpenter syndrome, and CHOP syndrome.
3.2.4. Obesity Associated with Endocrine Disorders
An endocrine workup should be considered in any case of rapid weight gain with concomitant growth arrest/short stature. In the differential diagnosis of obesity, some endocrine abnormalities (hypothyroidism, hypercortisolemia, growth hormone deficiency, pseudohypoparathyroidism) should be considered. In children with endocrine obesity, short stature, decreased growth velocity, and delayed bone age are typical. Iatrogenic obesity in children is related to chronic treatment with drugs that affect appetite and metabolism (glucocorticoids, antiseizure drugs such as valproic acid, and atypical neuroleptics such as clozapine, olanzapine, and risperidone). Hypothalamic obesity arises from dysfunction of the hunger and satiety centers in the hypothalamus and extreme hyperphagia. It can be caused by congenital abnormalities, head injuries, or a tumor located in the hypothalamic region.
3.3. Consequences and Complications of Obesity
3.3.1. Arterial Hypertension
Obesity is the main risk factor for the development of arterial hypertension in children and adolescents. Arterial hypertension (AH) is diagnosed in approximately 30% of pediatric patients with obesity, and the risk increases with the severity of the obesity. Weight gain accounts for up to 75% of the risk of primary AH. Blood pressure measurement is recommended in all children with overweight or obesity. Early diagnosis of AH is crucial for any interventions that may reduce cardiovascular morbidity and mortality later in life. Blood pressure (BP) should be measured in all children aged ≥3 years at least once a year and during any routine physician examination. In children with overweight and obesity, office BP measurements are also recommended in those <3 years of age, during routine health supervision visits and visits related to health problems (at least once a year). It is also recommended to perform measurements in children <3 years of age if there are additional risk factors such as neonatal complications, cardiac malformations, genetic diseases, acquired or congenital kidney diseases, neoplasms, drug use, and diseases inducing increased intracranial pressure.
Office Blood Pressure Measurement
The device used to measure BP must be validated for children, with an appropriately sized cuff covering 80–100% of the individual's arm circumference. Before BP measurement, it should be ensured that the patient has been sitting and relaxed for 3 to 5 min. Measurements should be taken three times, with an interval of 3 min between them, and the average of the last two should be used. The auscultatory method is recommended. If the oscillometric method is used, the device must be validated, and if AH is detected by the oscillometric method, it needs to be confirmed using the auscultatory method. The diagnosis of AH requires a mean value from 3 independent measurements at or above the 95th percentile for age, sex, and height. It is recommended to use the standards of the European Society of Hypertension (ESH) (see the online calculator: https://hyperchildnet.eu/blood-pressure-calculator/, accessed on 30 July 2022).
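The office measurement routine described above (three readings taken 3 min apart, averaging the last two, and comparing the result with the 95th percentile for age, sex, and height) can be sketched as a short routine. The percentile thresholds themselves must come from the ESH reference tables or the linked calculator; the function and the numbers used here are placeholders for illustration only.

```python
def office_bp_elevated(readings_mmhg: list[tuple[float, float]],
                       p95_systolic: float, p95_diastolic: float) -> bool:
    """Average the last two of three (systolic, diastolic) readings and
    compare with the 95th-percentile thresholds for age, sex, and height."""
    if len(readings_mmhg) != 3:
        raise ValueError("expected three readings taken 3 minutes apart")
    last_two = readings_mmhg[1:]
    mean_sys = sum(r[0] for r in last_two) / 2
    mean_dia = sum(r[1] for r in last_two) / 2
    return mean_sys >= p95_systolic or mean_dia >= p95_diastolic

# Placeholder thresholds; real values depend on age, sex, and height percentile.
print(office_bp_elevated([(128, 82), (126, 80), (124, 79)],
                         p95_systolic=125, p95_diastolic=82))
```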
Home Blood Pressure Monitoring
Home BP in children with obesity correlates with target organ damage better than office BP and may better reflect the effect of risk factors such as obesity and its metabolic complications. It is recommended to perform home BP monitoring with a validated oscillometric device for 6 to 7 days, with duplicate morning and evening measurements.
Ambulatory Blood Pressure Monitoring
Ambulatory blood pressure monitoring is recommended in pediatric patients with severe obesity, sleep-disordered breathing, target organ damage (left ventricular hypertrophy, microalbuminuria) with normal office BP (suspicion of masked hypertension), type 2 diabetes mellitus, or chronic kidney disease.
Assessment of Potential Target Organ Damage in Patients with Obesity and AH
If arterial hypertension is confirmed in a pediatric patient with obesity, the following diagnostic tests are recommended: (1) assessment of kidney function: blood urea nitrogen, creatinine (and glomerular filtration estimated by formula), electrolytes, urine examination, microalbuminuria; (2) evaluation of organ damage: echocardiography (to assess left ventricular hypertrophy or remodeling) and fundoscopy.
Treatment of Arterial Hypertension in Children with Obesity
Non-pharmacological therapy, including both dietary modifications and PA, is of great importance in the management of AH in children with obesity. Daily moderate- to high-intensity exercise for 60 to 90 min is recommended, and there are no contraindications to practicing particular types of PA. In the diet, limiting sodium intake and maintaining a proper sodium to potassium ratio are particularly important. Pharmacological therapy should be considered in children with grade 1 hypertension in whom BP has not adequately decreased despite 6–12 months of non-pharmacological therapy, and it is indicated in children with grade 2 hypertension and/or target organ damage. The preferred drug classes are angiotensin-converting enzyme inhibitors (ACEIs), angiotensin II receptor blockers (ARBs), and dihydropyridine calcium antagonists. If stage 2 hypertension, secondary causes, or any target organ damage are present, the patient must be referred to a specialist for further diagnostic tests and treatment.
3.3.2. Prediabetes and Type 2 Diabetes Mellitus
Assessment of glucose metabolism is recommended in all children and adolescents with overweight and obesity from the age of 6 years. Since there is some evidence that prediabetes is already present in approximately 5% of obese children <10 years of age, it is recommended to measure fasting blood glucose in all children with overweight and obesity aged <6 years as the first step to detect prediabetes and type 2 diabetes. The screening should be repeated every 2–3 years, or earlier if there is a rapid increase in weight or the development of other cardiometabolic complications. The oral glucose tolerance test (OGTT) is recommended every two years in children >10 years of age with a BMI above the 95th percentile (or earlier, if puberty has already commenced). The OGTT should be performed in a standard setting, with a glucose dose of 1.75 g/kg, to a maximum of 75 g. The use of glycated hemoglobin A1c (HbA1c) remains controversial in the pediatric age group because HbA1c has a lower sensitivity than fasting or OGTT plasma glucose. There is no recommendation to measure insulin concentrations during the diagnosis of obesity complications in children or adolescents.
Fasting insulin concentrations show considerable overlap between insulin-resistant and insulin-sensitive youth. Therefore, there is no well-defined cut-off point to differentiate normal from abnormal values, and there is no universally accepted, clinically useful numeric expression that defines insulin resistance. We recommend against using insulin testing as a basis for making therapeutic decisions. Non-pharmacological therapy, including both dietary modifications and PA, is of great importance in the treatment of prediabetes in children with obesity. Certain medications (e.g., metformin, liraglutide) have potent effects on glucose levels, and their use may be considered under the supervision of an expert. Treatment options for pediatric patients with type 2 diabetes include insulin, metformin, and liraglutide (age limits according to the summary of product characteristics).
3.3.3. Dyslipidemia
Dyslipidemias are disorders of lipoprotein metabolism that result in the following abnormalities: (1) high total cholesterol (TC); (2) high low-density lipoprotein cholesterol (LDL-C); (3) high non-high-density lipoprotein cholesterol (non-HDL-C); (4) high triglycerides (TG); (5) low HDL-C. Normal lipid and lipoprotein values in children vary by age and sex. In many patients, hyperlipidemia is caused by some underlying "non-lipid" etiology rather than a primary disorder of lipoprotein metabolism. Among the cardiovascular risk factors associated with increased morbidity and mortality, lipids and lipoproteins are of special importance, and in many studies, childhood obesity has been shown to be associated with increased levels of TC, LDL-C, and TG and decreased HDL-C. The most frequent lipid disorder in children with obesity is combined dyslipidemia, characterized by moderate to severe elevation of TG and non-HDL cholesterol, decreased HDL-C, and mild to moderate elevation of TC and LDL-C. Dyslipidemia is the most common consequence of childhood obesity and is present in as many as 43% of obese children. It is significantly related to insulin resistance, as the latter enhances hepatic delivery of non-esterified free fatty acids for TG production and their sequestration into triglyceride-rich lipoproteins. TGs are deposited in the vessel wall and initiate the process of LDL-C accumulation; they are strongly associated with the risk of developing atherosclerotic disease. LDL-C, very low-density lipoprotein (VLDL), and lipoprotein(a) are the primary apolipoprotein B-containing lipoproteins implicated in the formation of atherosclerotic lesions. HDL-C has been thought to be protective through its ability to prevent the oxidation of LDL-C. Atherosclerosis starts at a young age, and the number of young people who develop atherosclerosis is increasing, especially among children with risk factors such as familial hypercholesterolemia (FH), type 1 diabetes mellitus, and hypertension. In recent decades, hyperlipidemia in children and adolescents has been increasing, and many societies have identified these children as being at increased risk of premature atherosclerosis. The Bogalusa Heart Study demonstrated fatty streaks in 50% of cases between 2 and 15 years of age and in 85% of older subjects between 21 and 39 years of age. The prevalence and extent of atherosclerosis found in the aorta and coronary arteries increased with increasing BMI, BP measurements, serum TC, and LDL-C. The degree of atherosclerotic change increased with the worsening severity and greater number of risk factors.
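Two of the lipid fractions listed above are usually derived rather than measured directly: non-HDL-C is the difference between TC and HDL-C, and LDL-C is often estimated with the Friedewald formula when it is not measured. A minimal sketch is shown below, assuming all concentrations in mg/dL; the Friedewald estimate is a commonly used approximation and not part of the position statement itself, and the example values are illustrative only.

```python
def non_hdl_c(tc: float, hdl_c: float) -> float:
    """Non-HDL cholesterol (mg/dL) = total cholesterol - HDL cholesterol."""
    return tc - hdl_c

def ldl_c_friedewald(tc: float, hdl_c: float, tg: float) -> float:
    """Friedewald estimate of LDL-C (mg/dL); unreliable when TG >= 400 mg/dL."""
    if tg >= 400:
        raise ValueError("Friedewald formula is unreliable at TG >= 400 mg/dL")
    return tc - hdl_c - tg / 5.0

# Example fasting panel (mg/dL), illustrative values only.
tc, hdl, tg = 190.0, 42.0, 150.0
print(non_hdl_c(tc, hdl))            # 148.0
print(ldl_c_friedewald(tc, hdl, tg)) # 118.0
```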
Assessment of lipid metabolism is recommended in all children and adolescents with overweight and obesity from the age of 2 years. Screening of lipid levels in children reveals both genetic lipid abnormalities (e.g., familial hypercholesterolemia, which affects 1 in 250 people) and dyslipidemia that responds favorably to lifestyle changes. In children with excessive weight, it is recommended to assess the basic fasting lipid profile (TC, TG, LDL-C, HDL-C) every 2 years. In children with any lipid disorder, the fasting lipid profile measurement should be repeated every six months to monitor treatment effectiveness. In children with obesity, the diagnosis of dyslipidemia requires additional lifestyle changes to reduce the risk and occurrence of cardiovascular complications. Lifestyle change intervention is recommended as the primary therapeutic management of dyslipidemia caused by obesity. Even a slight weight loss is associated with a significant decrease in TG concentration and an increase in HDL-C concentration. In addition to the recommended diet, an adequate and regular increase in PA is helpful in achieving the desired therapeutic effect. As adjuvant treatment, plant stanols, plant sterols, and ω−3 fatty acids are recommended. Plant stanol and sterol esters have been shown to inhibit intestinal cholesterol absorption, leading to a reduction in LDL-C of up to 12%; a mild reduction in TG during their use has also been reported. Additionally, ω−3 fatty acids are widely accepted as a supplement used in children; their exact mechanism of action is not clear, but they mainly reduce TG levels. Use of the red yeast rice supplement monacolin K (also known as lovastatin), an inhibitor of hepatic cholesterol synthesis, can be considered with caution. It can reduce LDL-C by 15–25% within 6–8 weeks of therapy. Because its mechanism of action is similar to that of statins, possible side effects should be closely monitored. According to the National Heart, Lung, and Blood Institute (NHLBI), in cases in which non-pharmacological treatment has no effect, the use of pharmacological treatment should be considered. In accordance with the guidelines of the Polish Lipid Association (PoLA), in children above the age of 6 whose LDL-C concentration remains ≥190 mg/dL, or ≥160 mg/dL with other risk factors, after 6 months of a low-lipid diet, statin treatment together with non-pharmacological measures should be considered.
3.3.4. Digestive Tract Complications
The most common digestive tract complication related to obesity in children is metabolic-associated fatty liver disease (MAFLD). MAFLD, previously called nonalcoholic fatty liver disease, may be present in 38% of overweight and obese children and adolescents. The change in terminology aims to reflect the pathogenesis and risk factors of the disease, such as obesity. It is a hepatic presentation of insulin resistance. The risk of developing liver cirrhosis in children with MAFLD is much lower than in adults and amounts to 1–2% of children. In a child under 10 years of age with hepatic steatosis, secondary causes of the condition are common and should be considered (glycogen storage disease, hepatitis C virus, and others). In children with obesity, the diagnosis of MAFLD should be made on the basis of imaging and blood biomarkers. Liver biopsy is the standard of reference, although it is an invasive procedure and should be used only in justified cases.
The most widely available and recommended imaging method for assessing liver steatosis is ultrasound. If available, magnetic resonance imaging (MRI) can be performed. Computed tomography (CT), although accurate, is not recommended due to the high X-ray exposure. The blood biomarker of MAFLD is an increase in alanine aminotransferase (AlAT) to more than twice the upper limit of normal. Unfortunately, both non-invasive approaches (imaging and blood biomarkers) have moderate diagnostic accuracy. Additional evaluation with elastography could be useful; however, due to a lack of validation, its accuracy remains uncertain. From a clinical perspective, liver fibrosis is far more important than liver steatosis. Liver biopsy is the gold standard for the assessment of fibrosis, but non-invasive methods should replace it in the future once properly validated. Elastography, multiparametric MRI, and serum markers of fibrosis are being investigated. In the treatment of MAFLD, a diet with limited simple carbohydrates and increased fiber consumption is strongly recommended. The introduction of supplements with ω−3 polyunsaturated fatty acids was postulated to reduce liver steatosis, but this was not confirmed in other studies. Pharmacological treatment of the components of metabolic syndrome with metformin or statins should be considered when MAFLD is associated with lipid disturbances and insulin resistance. However, according to recent NASPGHAN guidelines, no pharmacotherapy is recommended. Cholelithiasis in children is a rare disease with a prevalence of 0.13–0.22%, and there is no indication for routine screening. The main risk factors for cholelithiasis are an elevated BMI and rapid weight loss. The risk of gallstones is higher in girls than in boys and increases with the severity of obesity and the use of contraceptive pills. In patients after sleeve gastrectomy, the incidence of symptomatic cholelithiasis is 3.5% over a period of 2 years. Cholelithiasis is symptomatic in only half of children. In the diagnostic approach to cholelithiasis, abdominal ultrasound and liver enzyme assessment are crucial. Asymptomatic gallstones are diagnosed during a routine ultrasound examination. There is no evidence to support routinely screening all obese children for cholelithiasis; however, abdominal ultrasound could be recommended in obese patients during or after rapid weight loss. In symptomatic patients (with pain in the right upper quadrant, vomiting, nausea, or jaundice), ultrasound is recommended. Symptomatic cholelithiasis requires endoscopic cholecystectomy, and in asymptomatic patients, medical therapy with ursodeoxycholic acid (UDCA) can be considered under close observation. UDCA treatment can also be effective in preventing gallstone formation in patients after sleeve gastrectomy. Obesity in children increases the risk of gastroesophageal reflux disease (GERD). GERD should be suspected if there is a characteristic clinical presentation (heartburn, usually after eating and worse at night, chest pain, difficulty swallowing, regurgitation). GERD symptoms increase progressively with increasing BMI and waist circumference, and symptoms suggestive of GERD were reported by 13.1% of obese children. Typical treatment and management are recommended, along with weight reduction to reduce symptoms.
3.3.5. Polycystic Ovary Syndrome and Obesity Impact on Puberty
In children with excessive weight, isolated, mild forms of precocious puberty (precocious pubarche, axillarche, thelarche) occur more often, and in obese girls, central puberty tends to start earlier.
The most common form of precocious puberty associated with obesity is precocious pubarche. It is related to the insulin excess often observed in obese children. Hyperinsulinemia can stimulate androgen production in the adrenals and ovaries. In prepubertal children, excessive adrenal androgen production can present clinically as the appearance of pubic and axillary hair before the age of 8 years in girls and 9 years in boys. It can be accompanied by pubertal sweat odor, mild acne, and moderately accelerated growth and bone age. It usually occurs more frequently in girls. In the hormonal assessment, isolated mild elevations of dehydroepiandrosterone sulfate (DHEAS) levels are observed. Less common in obese girls is isolated thelarche, a consequence of androgen conversion to estrogens in fat tissue. It is characterized by low concentrations of luteinizing hormone (LH) and estradiol with a mild increase in follicle-stimulating hormone (FSH) levels. Height velocity and bone age are not accelerated. The mild forms of precocious puberty in obese children do not need any treatment; they are characterized by a stable course or very slow progression. In their management, serial observation and behavioral treatment of excessive weight are indicated. In girls with excessive weight, irregular menses occur twice as often as in non-obese peers. After menarche, obesity can be a cause of menstrual disturbances (heavy, painful menstruation, oligomenorrhea, secondary amenorrhea) and polycystic ovary syndrome (PCOS). PCOS in adolescent girls is characterized by menstrual irregularities and clinical hyperandrogenism and is associated with infertility, metabolic disturbances, type 2 diabetes, and cardiovascular disease in adulthood. In obese adolescents, it is related to hyperinsulinemia, which can stimulate ovarian and adrenal androgen production as well as decrease the synthesis of sex hormone binding globulin (SHBG) in the liver, leading to androgen excess. According to the 2017 and 2018 consensuses, PCOS in adolescent girls can be diagnosed if both of the following criteria are met:
(1) Menstrual disturbances (irregular menses, oligomenorrhea, or secondary amenorrhea). Irregular menses are considered normal in the first gynecological year; however, a cycle duration of more than 90 days needs special attention. At a gynecological age of less than 3 years, the cycle is defined as irregular if it is shorter than 21 days or longer than 45 days. From the third gynecological year, the duration of the cycle should be between 21 and 35 days. Secondary amenorrhea is defined as a lack of menstruation for more than 3 months, and primary amenorrhea as a lack of menarche at the age of 15 years or more than 3 years post-thelarche.
(2) Hyperandrogenism (clinical and/or biochemical). The clinical presentation of hyperandrogenism in adolescent girls is hirsutism, defined as excessive, coarse, terminal hair growth distributed in a male pattern, assessed with the Ferriman–Gallwey score (8 or more points). It should be distinguished from hypertrichosis. Biochemical androgen excess should be assessed on the basis of total testosterone and SHBG measurements and the calculation of free/bioavailable testosterone or the free androgen index.
The diagnosis of PCOS can be made if the gynecological age is more than 2 years and persistent menstrual disturbances have been observed for more than 2 years.
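The cycle-length rules listed under criterion (1) depend on gynecological age and can be summarized in a single helper. This is only a restatement of the thresholds above in code form; the function name and inputs are illustrative.

```python
def cycle_irregular(gynecological_age_years: float, cycle_length_days: float) -> bool:
    """Apply the cycle-length thresholds by gynecological age:
    - any age: a cycle longer than 90 days needs special attention;
    - less than 3 years after menarche: irregular if < 21 or > 45 days;
    - from the third gynecological year: irregular if outside 21-35 days."""
    if cycle_length_days > 90:
        return True
    if gynecological_age_years < 1:
        return False  # irregular cycles are considered normal in the first year
    if gynecological_age_years < 3:
        return cycle_length_days < 21 or cycle_length_days > 45
    return cycle_length_days < 21 or cycle_length_days > 35

print(cycle_irregular(1.5, 50))  # True: 50-day cycle at 1.5 years after menarche
print(cycle_irregular(4.0, 33))  # False: 33-day cycle at 4 years after menarche
```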
Other causes of menstrual disturbances and hyperandrogenism must be excluded (hypothyroidism, hypercortisolemia, hyperprolactinemia, congenital adrenal hyperplasia, androgen-secreting tumor). The objectives of treatment are regular menses and a decrease in the clinical features of hyperandrogenism. In addition to body weight reduction, contraceptive therapy with a progestogen with antiandrogenic action is indicated. In very young patients and in those with contraindications to estrogen therapy (venous thrombosis, migraine with aura), natural progestogen therapy in the second phase of the cycle can be used. Antiandrogens (spironolactone, finasteride) are not registered for PCOS treatment, and their use should be considered with great caution. Metformin can be used in girls with PCOS and metabolic disturbances; in addition to improving the metabolic profile, it can restore regular menses.
3.3.6. Respiratory Disorders in Obesity
In patients with obesity, the most commonly reported symptoms include an increased respiratory rate, dyspnea after low to moderate exertion, wheezing, and chest pain. Respiratory disorders such as bronchial asthma, obstructive sleep apnea (OSA) syndrome, and hypoventilation syndrome are more common in this group of patients. Several review articles have appeared in recent years on the increased prevalence of asthma in obese patients; however, the topic remains highly controversial. Increased body fat may lead to systemic inflammation, increasing pro-inflammatory serum cytokines. Together with decreased lung compliance, lung volume, and peripheral airway diameter, bronchial hyperresponsiveness may also be important. Factors supporting the effect of obesity on asthma include improved disease control when weight is reduced, as well as the increased medication use and poorer quality of life observed in obese patients compared with normal-weight patients. The therapeutic efficacy of inhaled corticosteroids and of their combination with long-acting beta agonists (LABAs) is significantly reduced. In spirometry, lower values of forced expiratory volume in 1 s (FEV1), total lung capacity (TLC), and functional residual capacity (FRC) are observed compared with normal-weight patients with bronchial asthma. OSA is a condition manifested during sleep, characterized by repeated shallowing or complete absence of airflow through the upper airway with preserved chest and abdominal movements. It is associated with airflow limitation and consequent hypoxia (transient episodes of hypoxia and hypercapnia). It also causes sleep fragmentation through activation of the sympathetic nervous system and arousals. Its prevalence in children and adolescents with overweight or obesity ranges between 13% and 59%. Features that raise suspicion of OSA include mouth breathing, pauses in the breathing pattern, snoring during sleep, concentration problems, hyperactivity, headaches, and excessive daytime sleepiness. Untreated obstructive sleep apnea alters the quality of sleep and shortens the life expectancy of those affected. Polysomnographic studies are performed to diagnose OSA. Weight loss is the first-line therapy for obese children with OSA. For children with severe OSA, non-invasive ventilation (NIV) and continuous positive airway pressure (CPAP) can be the treatment of choice. Severe obesity and OSA may lead to obesity-hypoventilation syndrome, with hypoxia, hypercapnia, and reduced ventilatory drive.
Hypoventilation syndrome occurs in severe obesity, and its risk increases with increasing body weight. It is a chronic disease that reduces the patient's activity in social life, reduces quality of life, and increases the risk of death. It is characterized by an increase in the partial pressure of CO2 and a decrease in O2 (PaCO2 > 45 mmHg and PaO2 < 70 mmHg), with other causes such as neuromuscular disorders, pulmonary vascular pathology, iatrogenic causes (drugs, psychoactive substances), metabolic diseases, or respiratory and thoracic disorders excluded. The diagnostic criteria include a BMI ≥ 30 kg/m² combined with hypoventilation (PaCO2 > 45 mmHg, and during sleep > 55 mmHg for at least 10 min). Symptoms may initially be minor, and as hypercapnia increases, headaches, impaired concentration, excessive sleepiness, confusion, and decreased exercise tolerance may occur.
3.3.7. The Effects of Obesity on Musculoskeletal Health
Obesity is one of the most common conditions that negatively affect bone and joint health. Evidence has shown positive associations between elevated body fat and the development of slipped capital femoral epiphysis, Blount's disease, and genu varum. Moreover, fractures, musculoskeletal pain, impaired mobility, and lower extremity malalignment are more common in children and adolescents with excess weight. Persistence of obesity from childhood to adulthood may lead to an increased risk of osteoarthritis in the weight-bearing joints, particularly the knee. Longitudinal studies indicate that increased body fat may contribute to a higher risk of incident and worsening joint pain.
3.3.8. Renal Complications
It is recommended to assess kidney function in children and adolescents with obesity. In adults, obesity is an independent risk factor for chronic kidney disease. In children, the association is less clear, but complications of obesity (e.g., arterial hypertension, dyslipidemia, insulin resistance, hyperglycemia, inflammation, and autonomic dysfunction) can impair kidney function. Therefore, a basic evaluation of kidney function (creatinine level, estimated glomerular filtration rate (eGFR [mL/min/1.73 m²] = 0.413 × body height [cm] / serum creatinine [mg/dL]), and urine analysis) should be performed in children with overweight and obesity; a short worked example of this formula is given below, after Section 3.3.10. More detailed screening for kidney dysfunction (albuminuria, albumin/creatinine ratio) should be performed in patients with obesity and concomitant arterial hypertension or type 2 diabetes. Obesity also seems to be an important risk factor for incontinence, but the interaction between these factors is complex and needs further investigation.
3.3.9. Neurological Complications
Obesity in children is a risk factor for migraine and idiopathic intracranial hypertension. Obesity in pubertal children is associated with a higher risk of idiopathic intracranial hypertension (pseudotumor cerebri), manifested by headache, nausea, vomiting, retroocular pain, and visual impairment. However, this condition is much less common in children than in adults. The possible pathogenesis of idiopathic intracranial hypertension in obesity is increased intraabdominal pressure, which in turn increases intrathoracic and intracerebral venous pressure. The most common clinical symptom of pseudotumor cerebri is headache, usually worse in the morning. It can be accompanied by nausea, vomiting, retroocular pain, decreased or blurred vision, diplopia, or even transient visual obscuration.
In 19% of children, it is associated with permanent visual impairment. In younger children, irritability, apathy, and somnolence can occur. Less common are other nonspecific neurological symptoms—ataxia, dizziness, stiff neck, seizures, and facial nerve palsy. In some children, papilledema may be the only sign of pseudotumor cerebri, without other symptoms. The diagnosis of idiopathic intracranial hypertension is a diagnosis of exclusion. Diagnostic criteria are the presence of characteristic clinical symptoms, including papilledema, in a patient with a normal level of consciousness, a normal neurologic physical examination (except cranial nerves), normal findings on cerebrospinal fluid examination and neuroimaging studies, and increased intracranial pressure documented by lumbar puncture. Elevated intracranial pressure in a child with obesity can be diagnosed if the pressure of cerebrospinal fluid exceeds 28 cm H2O. Magnetic resonance imaging shows signs of elevated intracranial pressure. Management usually involves medication: acetazolamide, a diuretic that also reduces cerebrospinal fluid production. Furosemide can be used together with acetazolamide or alone if the first medication is contraindicated. In some patients, the symptoms can resolve after the diagnostic lumbar puncture. Obesity seems to be a risk factor for migraine progression and for the frequency of migraines. The prevalence of episodic migraine in obese children is higher compared to children of normal weight (8.9% vs. 2.5%). There is a relationship between headache physiopathology and the response of central and peripheral mechanisms to food consumption. The suggested mechanism includes obesity as a pro-inflammatory disease, which may be associated with neurovascular inflammation. Elevated levels of calcitonin gene-related peptide and dysregulation of the action of orexin, leptin, and adiponectin are possible proinflammatory factors related to obesity. Therefore, weight control should be part of migraine treatment in a child with excessive weight.

3.3.10. Mental Health Disorders
Overweight and obesity can lead to physiological and biochemical disorders of the body, as well as a deterioration in self-esteem, well-being, and relations with the environment. In children, they often initiate a negative emotional attitude towards themselves and a sense of non-acceptance by others. In the following years, they can lead to feelings of rejection and loneliness; obese teenagers very often feel disliked, lonely, and rejected by their peers. Young people who are overweight or obese have noticeable difficulties in realizing their dreams, and excessive body weight makes it difficult for them to start their adult life and pursue their professional plans. Additionally, it favors an unattractive self-image, which may contribute to loneliness, a sense of regret, sadness, and even depression. Wardle et al. found that body dissatisfaction was greater in obese children who developed obesity before the age of 16; therefore, it should be identified as part of the multidisciplinary assessment. A referral to a specialist is needed in case of suspected depressive and/or anxiety symptoms, suicidal risk, dysmorphophobic traits, and eating disorders. Obesity is a chronic recurrent disease related to excessive fat tissue accumulation that presents a risk to health.
The diagnosis of overweight, obesity, and severe obesity is usually based on the measurement of height and weight, calculation of the weight-to-length ratio in children below the age of 5 years, and body mass index (BMI) in older children. Indexes are assessed using child growth standards for age and sex. The advantages of these indexes are simplicity, low cost, and universality of measurement and assessment. However, it should be noted that they are not perfect in assessing the amount and distribution of the fat tissue accumulation that causes the development of obesity complications. In addition, they should be used with caution in particular situations, for example, in athletes with high muscle mass or in children with significant postural defects (scoliosis) that decrease the height measurement.

Diagnostic Tools and Data Interpretation
According to the World Health Organization (WHO), in children under the age of 5 years, overweight should be diagnosed if the weight-to-length ratio is greater than 2SD above the median of the child growth standard, and obesity when this ratio is greater than 3SD above the median. In children aged 3–18 years, Polish BMI percentile charts should be used, where overweight is defined as BMI above the 85th percentile (>1SD) and obesity above the 97th percentile (>2SD). WHO standards for children aged 5–19 years can also be used, with the overweight and obesity definitions in accordance with the Polish charts. It is also possible to use older BMI percentile charts for Polish children, published in 1999 by Palczewska and Niedzwiecka, where overweight is defined as BMI above the 90th percentile and obesity above the 97th percentile. However, using them, we risk underestimating the prevalence of overweight compared to the WHO charts. Due to the high risk of developing metabolic and cardiovascular complications, severe obesity should be specified. There are few definitions of severe obesity in children. We propose to use one in which severe obesity is diagnosed in children older than 5 years if BMI exceeds 3SD (99.9th centile). Waist circumference is an index of abdominal (visceral) fat tissue accumulation related to metabolic complications that can be used in children. It is measured at the level of the midpoint between the lowest rib and the iliac crest. For Polish children, centile charts of waist circumference for age and sex were developed within the OLA/OLAF project. Up to the age of 16 years, waist circumference exceeding the 90th percentile for age and sex defines abdominal obesity and is associated with increased cardiometabolic risk. In older adolescents, adult cut-off point values for abdominal obesity should be used (94 cm for males and 80 cm for females).
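The decision rules above reduce to a few threshold comparisons once the age- and sex-specific reference values are known. The sketch below is a minimal illustration of those rules, assuming the caller has already derived the child's z-scores (and, for waist circumference, the 90th-percentile value) from the appropriate WHO, Polish, or OLA/OLAF charts, which are not reproduced here; the function and parameter names are illustrative only.

```python
def classify_weight_status(age_years: float,
                           wfl_z: float | None = None,
                           bmi_z: float | None = None) -> str:
    """Rough weight-status classification following the thresholds above.

    wfl_z -- weight-for-length z-score (WHO standard), used under 5 years
    bmi_z -- BMI z-score from Polish/WHO charts, used from 5 years of age
    """
    if age_years < 5:
        if wfl_z is None:
            raise ValueError("weight-for-length z-score required under 5 years")
        if wfl_z > 3:
            return "obesity"            # > 3 SD above the median
        return "overweight" if wfl_z > 2 else "normal"
    if bmi_z is None:
        raise ValueError("BMI z-score required from 5 years of age")
    if bmi_z > 3:
        return "severe obesity"         # > 3 SD (~99.9th centile), > 5 years
    if bmi_z > 2:
        return "obesity"                # > 97th percentile
    return "overweight" if bmi_z > 1 else "normal"   # > 85th percentile


def abdominal_obesity(age_years: float, sex: str,
                      waist_cm: float,
                      waist_p90_cm: float | None = None) -> bool:
    """Waist-circumference rule: > 90th percentile (OLA/OLAF charts) up to
    16 years, adult cut-offs afterwards (94 cm for males, 80 cm for females)."""
    if age_years < 16:
        if waist_p90_cm is None:
            raise ValueError("90th-percentile value from reference charts required")
        return waist_cm > waist_p90_cm
    return waist_cm > (94.0 if sex == "male" else 80.0)
```

For example, under these rules a 7-year-old with a BMI z-score of +2.4 would be classified as obese, while severe obesity would require a z-score above +3.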
3.2.1. ‘Simple’ Obesity
The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended. The weight status of children is closely associated with healthy lifestyle behaviors, such as physical activity, sedentary behavior, screen time, sleep, and dietary behaviors. Over 90% of obesity cases are idiopathic and less than 10% are associated with genetic and hormonal causes.

Unhealthy Diet
Poor eating habits, including inadequate intake of vegetables, fruit, and milk, and eating too many high-calorie snacks, play a main role in the development of childhood obesity. Body weight is regulated by various physiological mechanisms that maintain the balance between energy intake and energy expenditure. If these regulatory systems were not operating, a positive energy balance of only 500 kJ (120 kcal) per day (approximately one serving of sugar-sweetened soft drink) would produce a 50 kg increase in body mass over 10 years. Apart from excess caloric intake, other factors very important for the development of childhood obesity are an incorrect or insufficient number of meals, skipping breakfast, drinking sugar-sweetened beverages, eating out, eating without hunger, and eating in front of the TV screen. In research conducted by Toschke et al. on 477 children aged 5–7, the prevalence of obesity decreased with a higher number of meals consumed during the day. In the group of children who ate 3 or fewer meals per day, 15% of children were overweight and 4.2% were obese. Among children who ate 5 or more meals per day, the prevalence of overweight and obesity was 8.1% and 1.7%, respectively. People who regularly skipped breakfast had a 4.5 times higher risk of obesity than those who regularly ate breakfast.

Sedentary Lifestyle
Research conducted in 49 countries in 2018 shows that 80% of Polish children lead a sedentary lifestyle; Polish children took the penultimate place among their peers from Europe. Children and adolescents spend between 246 and 387 min a day sitting. European children spend up to 2.7 h a day watching TV.
Global trends, including excessive screen time, are creating a generation of ‘inactive children.’ During the pandemic, the percentage of children meeting the PA guidelines fell even further, while the percentage of children spending ≥ 2 h a day in front of a screen increased from 66% to 88%. Studies have shown that inactivity and sitting for more than four hours a day significantly increase the risk of cardiovascular disease, diabetes, and obesity, reduce sleep time, and also worsen prosocial behavior. The latest reports about obesity say that a sedentary lifestyle and video games are the "new thrombophilia cocktail" in adolescents. Weight gain is caused by more time spent sitting, but also by a greater consumption of snacks and sweets. Therefore, attention should be paid to activities that aim to modify a sedentary lifestyle both at school and at home. Just three 5-min walks during the working day can reverse the changes in the peripheral arteries of the legs caused by prolonged sitting. A 2017 study found that climbing stairs, considered high-intensity PA, burns more calories per minute than running. Introducing active video games to increase daily energy expenditure in obese and sedentary children is not a substitute for sports activities but may contribute to increasing energy expenditure beyond the threshold of sedentary activity. Involving children in everyday activities, such as cleaning up after a meal, vacuuming, walking the dog, and taking out the garbage, reduces the time spent in a sitting position. Commercial breaks while watching TV may be used for this purpose. A desk with an adjustable tabletop height or a seat in the form of a fitness ball will also enforce “active sitting”. Balls provide better concentration in learning than a short period of intense PA or a lack of PA while studying. The reduction in school sitting time and the use of active breaks during long sitting resulted in a significant improvement in the apoB/apoA-1 ratio, with average effect sizes for TC, HDL-C, and the TC to HDL-C ratio. The ability to concentrate attention is also improved. Measuring the number of steps and using health apps on a phone is an effective way to increase a child's PA and thus support weight loss. Most studies use screen time as a proxy for total sedentary time, but media use does not represent all sedentary time. Many interventions to reduce sitting time have focused on increasing PA. It has been shown that active children or athletes, compensating for their high PA, spend quite a lot of time resting. It is therefore important to correctly evaluate sedentary time in children. Sedentary behaviors in children with excessive weight should be reduced to a maximum of 2 h per day.

Sleep Restrictions
Sleep restriction in children and adolescents appears to be associated with an increased risk of weight gain, visceral obesity, and increased body fat mass, which may persist or manifest several years later. Increasing PA to at least 60 min per day promotes sleep hygiene and reduces the risk of developing overweight or obesity. Excessive use of computer screens, tablets, and smartphones, especially in the evening and at night, may have a disruptive effect on sleep patterns, leading to a greater desire to eat at night and snack during the day.

Psychological Mechanisms
The psychological mechanisms behind the onset and maintenance of obesity are the object of inquiry in scientific studies by psychologists with different theoretical backgrounds.
Excessive eating, the compulsive consumption of food, and affected somatic functioning (excessive body weight) are often signs of difficulties in a person's psychological functioning. Obesity can be significant in terms of the mother–child relationship and other relationships in the family. A child's obesity can play a role in experiencing emotions and in social relationships with peers and adults. Additionally, some recent research points to a role of chronic stress and alterations in glucocorticoid secretion and action in the development of overweight and obesity. Stress may play a major role in the development and maintenance of excessive body weight in individuals who have increased glucocorticoid exposure or sensitivity due to increased long-term cortisol levels.

Binge Eating Disorder (BED)
Most of the excess eating that leads to obesity is due not to physical hunger but to psychological causes. Certain cognitive schemas, therefore, trigger emotions and behavior towards food. An important role in excessive eating is also played by ineffective mechanisms of emotional regulation related to the predominance of arousal processes over inhibition processes. This results in a unique style of coping with emotional tension, a reduced ability to defer gratification, and impulsiveness. Binge Eating Disorder is characterized by the occurrence of recurrent, uncontrolled binge eating episodes, defined as eating significantly more food in a given time than most people would eat under similar circumstances and in a similar time. The American psychiatric classification, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), distinguished BED as an independent disease entity, code 307.51 (F50.8). BED is now recognized as a separate type of eating disorder (in DSM-5) in addition to eating disorders such as bulimia nervosa (BN) and anorexia nervosa (AN). According to various data, this problem affects about 2–5% of the population and more often affects women. This percentage increases significantly in obese people, ranging from 30% to even 36–42%, and 13–27% of obese individuals seeking treatment have ED. To diagnose BED, ≥3 of the following indicators of impaired control over binge eating episodes must be present: eating until an unpleasant feeling of fullness appears; eating large amounts of food when not physically hungry; eating more rapidly than usual; eating alone because of embarrassment; and feelings of disgust, guilt, or depression after an episode of binge eating. Additionally, to diagnose BED, binge eating episodes must occur at least once a week for at least 3 months.

3.2.2. Monogenic Obesity
Monogenic obesity should be considered in children with early onset of weight gain (<2 years of age) and concomitant hyperphagia. Causes of secondary obesity include: genetic (monogenic, syndromic), endocrine, iatrogenic, or hypothalamic. Suspicion of secondary obesity should be based on anamnesis (patient's and family history) and physical examination with anthropometric evaluation, followed by additional diagnostics (differential diagnosis, hormonal, genetic, imaging assessment). The clinical features suggesting a genetic cause of obesity are: (1) history of consanguinity in the family; (2) intellectual impairment; (3) dysmorphic features; (4) organ/system-specific abnormalities; (5) severe obesity of early onset; (6) hyperphagia and food-seeking behaviors; (7) other specific features/characteristic phenotypes.
The confirmation of the diagnosis should be made on the basis of genetic testing. Genetic obesity can be caused by a mutation in a single gene (monogenic), inherited recessively. It disrupts the regulatory system of satiety and hunger as well as energy expenditure. It is a rare condition and occurs in 3–10% of children with severe obesity. The most common gene mutations related to monogenic obesity are listed in . Personalized treatment is available for some mutations. Patients with leptin deficiency and biologically inactive leptin can be treated with recombinant human leptin (metreleptin). The melanocortin 4 receptor (MC4R) agonist setmelanotide is now approved for treatment in patients with proopiomelanocortin, leptin receptor, and proprotein convertase subtilisin/kexin type 1 (PCSK1) deficiencies. It is also known that patients with some mutations can be successfully treated with well-known drugs; for example, glucagon-like peptide 1 (GLP-1) agonists are effective in weight reduction in patients with MC4R mutations, and obesity related to kinase suppressor of Ras 2 mutations is well treated with metformin. Identification of a monogenic background is also important in a patient's qualification for bariatric surgery.

3.2.3. Syndromic Obesity
Syndromic obesity is usually related to dysmorphic features, intellectual impairment, and organ/system-specific abnormalities. Syndromic obesity can be caused by a single gene mutation or by a change in a larger chromosomal region that involves several or many genes. In addition to obesity, it is usually associated with dysmorphic features and abnormalities characteristic of the syndrome. It is estimated that obesity can be a feature of almost 100 syndromes. The most common are Prader–Willi syndrome and Bardet–Biedl syndrome. Prader–Willi syndrome is the most common form of syndromic obesity (1:15,000–25,000 births). It is caused by inactivation of the region 15q11-13 of the paternal chromosome. The characteristic features of this syndrome are: (1) severe neonatal hypotonia; (2) feeding problems and poor weight gain in the first year of life; (3) hyperphagia and obesity appearing at about 4–8 years of age; (4) characteristic dysmorphic features (small hands and feet, almond-shaped eyes, prominent nasal bridge, downturned lips, tall, narrow forehead); (5) hormonal deficiencies (growth hormone deficiency, hypogonadism, hypothyroidism); (6) intellectual impairment, speech difficulties, and behavioral disturbances. Genetic confirmation of the syndrome should be made as soon as possible in all neonates with hypotonia and in all older children with a characteristic phenotype. Implementation of recombinant growth hormone (rGH) treatment is possible in all children with Prader–Willi syndrome with a BMI below the 97th percentile. Therapy with rGH improves body composition, with an increase in lean body mass and a decrease in the visceral fat depot. Bardet–Biedl syndrome is a ciliopathy caused by an autosomal recessive mutation in one of the 24 genes related to the function of the BBSome—the protein complex involved in the function of the cilia. In addition to obesity, Bardet–Biedl syndrome is characterized by polydactyly, syndactyly, ataxia, hypertonia, speech difficulties, retinal dystrophy, intellectual impairment, renal dysfunction, and hypogonadism. Less common syndromes associated with the development of obesity are, for example, Alstrom syndrome, Borjeson–Forssman–Lehmann syndrome, Carpenter syndrome, and CHOP syndrome. 3.2.4.
Obesity Associated with Endocrine Disorders
Endocrine workup should be considered in any case of rapid weight gain with concomitant growth arrest/short stature. In the differential diagnosis of obesity, some endocrine abnormalities (hypothyroidism, hypercortisolemia, growth hormone deficiency, pseudohypoparathyroidism) should be considered. In children with endocrine obesity, short stature, decreased growth velocity, and delayed bone age are typical. Iatrogenic obesity in children is related to chronic treatment with some drugs that affect appetite and metabolism (glucocorticoids, antiseizure drugs—valproic acid, atypical neuroleptics—e.g., clozapine, olanzapine, risperidone). Hypothalamic obesity arises from dysfunction of the hunger and satiety centers in the hypothalamus and is accompanied by extreme hyperphagia. It can be caused by congenital abnormalities, head injuries, or a tumor located in the hypothalamic region.
3.3.1. Arterial Hypertension
Obesity is the main risk factor for the development of arterial hypertension in children and adolescents. Arterial hypertension (AH) is diagnosed in approximately 30% of pediatric patients with obesity, and the risk increases with the severity of the obesity. Weight gain accounts for up to 75% of the risk of primary AH. Blood pressure measurement is recommended in all children with overweight or obesity. Early diagnosis of AH is crucial for any interventions that may reduce cardiovascular morbidity and mortality later in life. Blood pressure (BP) should be measured in all children aged ≥3 years at least once a year and during any routine physician examination. In children with overweight and obesity, however, office BP measurements are recommended also below 3 years of age, during routine health supervision visits and visits related to health problems (at least once a year). It is also recommended to perform measurements in children <3 years of age if there are additional risk factors such as neonatal complications, cardiac malformations, genetic diseases, acquired or congenital kidney diseases, neoplasms, drug use, and diseases inducing increased intracranial pressure.

Office Blood Pressure Measurement
The device used to measure BP must be validated for children, with an appropriately sized cuff covering 80–100% of the individual's arm circumference. Before BP measurement, the patient should sit relaxed for 3 to 5 min. The measurement should be done three times, with an interval of 3 min between measurements, and the average of the last two should be used. The auscultatory method is recommended. If the oscillometric method is used, the device must be validated. If AH is detected by the oscillometric method, it needs to be confirmed using the auscultatory method. The diagnosis of AH requires a mean value from 3 independent measurements at or above the 95th percentile for age, sex, and height. It is recommended to use the standards recommended by the ESH (European Society of Hypertension) (see online calculator: https://hyperchildnet.eu/blood-pressure-calculator/ accessed on 30 July 2022).
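As a small illustration of the office measurement rule just described (three readings, average of the last two, comparison against the 95th percentile for age, sex, and height), here is a minimal sketch. It assumes the 95th-percentile thresholds have been looked up beforehand in the ESH reference tables or the linked calculator, and it only evaluates a single visit; the diagnosis of AH still requires elevated means on three independent occasions. The names and structure are illustrative, not part of any guideline.

```python
def office_bp_visit(readings: list[tuple[float, float]],
                    p95_systolic: float,
                    p95_diastolic: float) -> dict:
    """Evaluate one office visit: three (systolic, diastolic) readings in mmHg,
    taken 3 min apart; the mean of the last two is compared with the
    95th-percentile thresholds supplied by the caller."""
    if len(readings) != 3:
        raise ValueError("exactly three readings are expected per visit")
    last_two = readings[1:]
    mean_sys = sum(s for s, _ in last_two) / 2
    mean_dia = sum(d for _, d in last_two) / 2
    return {
        "mean_systolic": mean_sys,
        "mean_diastolic": mean_dia,
        # A single elevated visit only raises suspicion; AH is diagnosed when
        # the mean is at or above the 95th percentile on three independent occasions.
        "at_or_above_p95": mean_sys >= p95_systolic or mean_dia >= p95_diastolic,
    }

# Example: readings 124/80, 120/78, 118/76 mmHg with assumed thresholds of
# 121/79 give means of 119/77, so this visit would not be flagged.
visit = office_bp_visit([(124, 80), (120, 78), (118, 76)], 121, 79)
```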
Home Blood Pressure Monitoring
Home BP in children with obesity correlates with target organ damage better than office BP and may better reflect the effect of risk factors such as obesity and its metabolic complications. It is recommended to perform home BP monitoring with a validated oscillometric device for 6 to 7 days, with duplicate morning and evening measurements.

Ambulatory Blood Pressure Monitoring
Ambulatory blood pressure monitoring is recommended in pediatric patients with severe obesity, with sleep-disordered breathing, with any target organ damage (left ventricular hypertrophy and microalbuminuria) and normal office BP (suspicion of masked hypertension), with type 2 diabetes mellitus, and with chronic kidney disease.

Assessment of Potential Target Organ Damage in Patients with Obesity and AH
If arterial hypertension is confirmed in a pediatric patient with obesity, the following diagnostic tests are recommended: (1) assessment of kidney function: blood urea nitrogen, creatinine (and glomerular filtration by formula), electrolytes, urine examination, microalbuminuria; (2) evaluation of organ damage: echocardiography (to assess left ventricular hypertrophy or remodeling), fundoscopy.

Treatment of Arterial Hypertension in Children with Obesity
Non-pharmacological therapy, including both dietary modifications and PA, is of great importance in the management of AH in children with obesity. Daily moderate- to high-intensity exercise for 60 to 90 min is recommended. There are no contraindications to practicing particular types of PA. In the diet, particularly important are the limitation of sodium intake and a proper sodium to potassium ratio. Pharmacological therapy should be considered in children with grade 1 hypertension in whom BP did not adequately decrease despite 6–12 months of non-pharmacological therapy, and it is indicated in children with grade 2 hypertension and/or target organ damage. The preferred drug classes are angiotensin-converting enzyme inhibitors (ACEIs), angiotensin II receptor blockers (ARBs), and dihydropyridine calcium antagonists. If grade 2 hypertension, secondary causes, or any target organ damage are present, the patient must be referred to a specialist for further diagnostic tests and treatment.

3.3.2. Prediabetes and Type 2 Diabetes Mellitus
Assessment of glucose metabolism is recommended in all children and adolescents with overweight and obesity from the age of 6 years. Since there is some evidence that prediabetes is already present in approximately 5% of obese children <10 years, it is recommended to measure fasting blood glucose in all children with overweight and obesity at the age of <6 years as the first step to detect prediabetes and type 2 diabetes. The screening must be repeated after 2–3 years, unless there is a rapid increase in weight or the development of other cardiometabolic complications. The oral glucose tolerance test (OGTT) is recommended to be performed every two years in children >10 years of age with a BMI above the 95th percentile (or earlier, if puberty has already commenced). The OGTT should be performed in a standard setting, with a glucose dose of 1.75 g/kg, up to a maximum of 75 g. The use of glycated hemoglobin A1c (HbA1c) remains controversial at pediatric age because HbA1c has a lower sensitivity than fasting or OGTT plasma glucose. There is no recommendation to measure insulin concentrations during the diagnostics of obesity complications in children or adolescents.
Fasting insulin concentrations show considerable overlap between insulin-resistant and insulin-sensitive youth. Therefore, there is no well-defined cut-off point to differentiate normal from abnormal, and there is no universally accepted, clinically useful numeric expression that defines insulin resistance. We recommend against using insulin testing as a basis for making therapeutic decisions. Non-pharmacological therapy, including both dietary modifications and PA, is of great importance in the treatment of prediabetes in children with obesity. Certain medications (e.g., metformin, liraglutide) have potent effects on glucose levels, and their use may be considered under the supervision of an expert. Treatment options for pediatric patients with type 2 diabetes include insulin, metformin, and liraglutide (age limit according to the summary of product characteristics).

3.3.3. Dyslipidemia
Dyslipidemias are disorders of lipoprotein metabolism that result in the following abnormalities: (1) high total cholesterol (TC); (2) high low-density lipoprotein cholesterol (LDL-C); (3) high non-high-density lipoprotein cholesterol (non-HDL-C); (4) high triglycerides (TG); (5) low HDL-C. Normal lipid and lipoprotein values in children vary by age and sex. In many patients, hyperlipidemia is caused by some underlying "non-lipid" etiology rather than a primary disorder of lipoprotein metabolism. Among cardiovascular risk factors associated with increased morbidity and mortality, lipids and lipoproteins are of special importance, and in many studies, childhood obesity has been shown to be associated with increased levels of TC, LDL-C, and TG, and decreased HDL-C. The most frequent lipid disorder in children with obesity is combined dyslipidemia, characterized by moderate to severe elevation in TG and non-HDL cholesterol, decreased HDL-C, and mild to moderate elevation in TC and LDL-C. Dyslipidemia is the most common consequence of childhood obesity and is present in as many as 43% of obese children. It is related significantly to insulin resistance, as the latter enhances hepatic delivery of non-esterified free fatty acids for TG production and sequestration into triglyceride-rich lipoproteins. TGs are deposited in the vessel wall and initiate the process of LDL-C accumulation. They are strongly associated with the risk of developing atherosclerotic disease. LDL-C, very low-density lipoprotein (VLDL), and lipoprotein(a) are the primary apolipoprotein-B-containing lipoproteins implicated in the formation of atherosclerotic lesions. HDL-C has been thought to be protective through its ability to prevent oxidation of LDL-C. Atherosclerosis starts at a young age, and the number of young people who develop atherosclerosis is increasing, especially children with risk factors such as familial hypercholesterolemia (FH), type 1 diabetes mellitus, and hypertension. In recent decades, hyperlipidemia in children and adolescents has been increasing, and many societies have identified these children as being at increased risk for premature atherosclerosis. The Bogalusa Heart Study demonstrated fatty streaks in 50% of cases between 2 and 15 years of age and in 85% of older subjects between 21 and 39 years of age. The prevalence and extent of atherosclerosis found in the aorta and coronary arteries increased with increasing BMI, BP measurements, serum TC, and LDL-C. The degree of atherosclerotic changes increased with worsening severity and a greater number of risk factors.
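Of the lipid fractions listed at the start of this section, non-HDL cholesterol is the one usually derived rather than measured directly, and it gathers all of the apolipoprotein-B-containing particles named above (LDL, VLDL, lipoprotein(a)). A minimal helper is sketched below; it assumes concentrations in mg/dL and deliberately leaves cut-off values to the caller, since pediatric reference values vary by age and sex and are not reproduced in this text.

```python
def non_hdl_cholesterol(total_chol: float, hdl: float) -> float:
    """Non-HDL-C = total cholesterol minus HDL-C (same units, e.g., mg/dL).

    Captures the apolipoprotein-B-containing fractions (LDL, VLDL, Lp(a))
    described above as atherogenic; age- and sex-specific cut-offs must be
    taken from reference tables and are not encoded here.
    """
    return total_chol - hdl

# Example: TC 210 mg/dL and HDL-C 45 mg/dL give a non-HDL-C of 165 mg/dL.
non_hdl = non_hdl_cholesterol(210, 45)
```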
Assessment of lipid metabolism is recommended in all children and adolescents with overweight and obesity from the age of 2 years. Screening of lipid levels in children reveals both genetic lipid abnormalities (e.g., familial hypercholesterolemia, which affects 1 in 250 people) and dyslipidemia that responds favorably to lifestyle changes. In children with excessive weight, it is recommended to assess the basic, fasting lipid profile (TC, TG, LDL-C, HDL-C) every 2 years. In children with any lipid disorder, the fasting lipid profile measurement should be repeated every six months to monitor treatment effectiveness. In children with obesity, the diagnosis of dyslipidemia requires additional lifestyle changes to reduce the risk and occurrence of cardiovascular complications. Lifestyle change intervention is recommended as the primary therapeutic management of dyslipidemia caused by obesity. Even a slight weight loss is associated with a significant decrease in the TG concentration and an increase in the HDL-C concentration. In addition to the recommended diet, an adequate and regular increase in PA is helpful in achieving the desired therapeutic effect. As adjuvant treatment, the use of plant stanols, plant sterols, and ω-3 fatty acids is recommended. Plant stanol and sterol esters were shown to inhibit intestinal cholesterol absorption, leading to a reduction in LDL-C of up to 12%; a mild reduction of TG during their usage has also been reported. Additionally, ω-3 fatty acids are widely accepted as a supplement used in children. Their exact mechanism of action is not clear, but they mainly reduce the TG level. Usage of the red yeast rice supplement monacolin K, also known as lovastatin, an inhibitor of liver cholesterol synthesis, can be considered with caution. It is able to reduce LDL-C by 15–25% within 6–8 weeks of therapy. Due to its mechanism of action, similar to statins, the possible side effects should be closely monitored. According to the National Heart, Lung, and Blood Institute (NHLBI), in cases in which non-pharmacological treatment has no effect, the use of pharmacological treatment should be considered. In accordance with the guidelines of the Polish Lipid Association (PoLA), in children above the age of 6 with a persistent LDL-C concentration ≥190 mg/dL, or ≥160 mg/dL with other risk factors, statin treatment, together with non-pharmacological treatment, should be considered after 6 months of a low-lipid diet.

3.3.4. Digestive Tract Complications
The most common digestive tract complication related to obesity in children is metabolic-associated fatty liver disease (MAFLD). MAFLD, previously called nonalcoholic fatty liver disease, may be present in 38% of overweight and obese children and adolescents. The change in terminology aims to reflect the pathogenesis and risk factors for the disease, such as obesity. It is a liver presentation of insulin resistance. The risk of developing liver cirrhosis in children with MAFLD is much lower than in adults and amounts to 1–2% of children. In a child under the age of 10 years with hepatic steatosis, secondary causes of the condition are common and should be considered (glycogen storage disease, hepatitis C virus, and others). In children with obesity, the diagnosis of MAFLD should be made on the basis of imaging and blood biomarkers. Liver biopsy is the standard of reference, although it is an invasive procedure and should be used only in justified cases.
The most available and recommended imaging method for assessing liver steatosis is ultrasound. If available, magnetic resonance imaging (MRI) can be performed. Computed tomography (CT), although accurate, is not recommended due to the high X-ray exposure. The blood biomarker of MAFLD is an increase in alanine aminotransferase (AlAT) to more than twice the upper limit of normal. Unfortunately, both non-invasive methods (imaging and blood biomarkers) have moderate diagnostic accuracy. Additional evaluation with elastography could be useful; however, due to the lack of validation, its accuracy remains uncertain. From a clinical perspective, liver fibrosis is far more important than liver steatosis. Liver biopsy is the gold standard for the assessment of fibrosis, but non-invasive methods should replace liver biopsy in the future once properly validated. Elastography, multiparametric MRI, and serum markers of fibrosis are being investigated. In the treatment of MAFLD, a diet with limited simple carbohydrates and increased fiber consumption is strongly recommended. The introduction of supplements with ω-3 polyunsaturated fatty acids was postulated to reduce liver steatosis, but this was not confirmed in other studies. Pharmacological treatment of components of the metabolic syndrome with metformin or statins should be considered once MAFLD is associated with lipid disturbances and insulin resistance. However, according to recent NASPGHAN guidelines, no pharmacotherapy is recommended. Cholelithiasis in children is a rare disease with a prevalence of 0.13–0.22%, and there is no indication for routine screening. The main risk factors for cholelithiasis are elevated BMI and rapid weight loss. The risk of gallstones is higher in girls than in boys and increases with the severity of obesity and the use of contraceptive pills. In patients after sleeve gastrectomy, the incidence of symptomatic cholelithiasis is 3.5% over a period of 2 years. Cholelithiasis is symptomatic in only half of children. In the diagnostic approach to cholelithiasis, abdominal ultrasound and liver enzyme assessment are crucial. Asymptomatic gallstones are diagnosed during a routine ultrasound examination. There is no evidence to support routinely screening all obese children for cholelithiasis. However, abdominal ultrasound can be recommended in obese patients during/after rapid weight loss. In symptomatic patients (with pain in the upper right quadrant, vomiting, nausea, jaundice), ultrasound is recommended. Symptomatic cholelithiasis requires laparoscopic cholecystectomy, and in asymptomatic patients, medical therapy with ursodeoxycholic acid (UDCA) can be considered under close observation. UDCA treatment can also be effective in the prevention of gallstone formation in patients after sleeve gastrectomy. Obesity in children increases the risk of gastroesophageal reflux disease (GERD). GERD should be suspected if there is a characteristic clinical presentation (heartburn, usually after eating and worse at night, chest pain, difficulty swallowing, regurgitation). GERD symptoms increase progressively with increasing BMI and waist circumference; 13.1% of obese children reported symptoms suggestive of GERD. Typical treatment and management are recommended, along with weight reduction to reduce symptoms.

3.3.5. Polycystic Ovary Syndrome and Obesity Impact on Puberty
In children with excessive weight, isolated, mild forms of precocious puberty (precocious pubarche, axillarche, thelarche) occur more often, and in obese girls, central puberty tends to start earlier.
The most common form of precocious puberty associated with obesity is precocious pubarche. It is related to insulin excess, which is often observed in obese children. Hyperinsulinemia can stimulate androgen production in the adrenals and ovaries. In prepubertal children, excessive adrenal androgen production can present clinically as the appearance of pubic and axillary hair before the age of 8 in girls and 9 in boys. It can be accompanied by pubertal sweat odor, mild acne, and moderately accelerated growth and bone age. It occurs more frequently in girls. In the hormonal assessment, isolated mild elevations of dehydroepiandrosterone sulfate (DHEAS) levels are observed. Less common in obese girls is isolated thelarche, a consequence of androgen conversion to estrogens in adipose tissue. It is characterized by low concentrations of luteinizing hormone (LH) and estradiol with a mild increase in follicle-stimulating hormone (FSH) levels. Height velocity and bone age are not accelerated. The mild forms of precocious puberty in obese children do not need any treatment; they are characterized by a stable course or very slow progression. Serial observation and behavioral treatment of excessive weight are indicated.
In girls with excessive weight, irregular menses occur twice as often as in non-obese peers. After menarche, obesity can be a cause of menstrual disturbances (heavy, painful menstruation, oligomenorrhea, secondary amenorrhea) and polycystic ovary syndrome (PCOS). PCOS in adolescent girls is characterized by menstrual irregularities and clinical hyperandrogenism and is associated with infertility, metabolic disturbances, type 2 diabetes, and cardiovascular disease in adulthood. In obese adolescents, it is related to hyperinsulinemia, which can stimulate ovarian and adrenal androgen production, as well as decrease the synthesis of sex hormone binding globulin (SHBG) in the liver, leading to androgen excess. According to the 2017 and 2018 consensuses, PCOS in adolescent girls can be diagnosed if both of the following criteria are met: (1) Menstrual disturbances (irregular menses, oligomenorrhea, or secondary amenorrhea). Irregular menses are considered normal in the first gynecological year; however, a cycle duration of more than 90 days needs special attention. At a gynecological age of less than 3 years, the cycle is defined as irregular if it is shorter than 21 days or longer than 45 days. From the third gynecological year, the duration of the cycle should be between 21 and 35 days. Secondary amenorrhea is defined as a lack of menstruation for more than 3 months, and primary amenorrhea as a lack of menarche at the age of 15 years or more than 3 years post-thelarche. (2) Hyperandrogenism (clinical and/or biochemical). The clinical presentation of hyperandrogenism in adolescent girls is hirsutism, defined as excessive, coarse, terminal hair growth distributed in a male fashion and assessed with the Ferriman–Gallwey score (8 or more points); it should be distinguished from hypertrichosis. Biochemical androgen excess should be assessed on the basis of total testosterone and SHBG measurements and calculation of free/bioavailable testosterone or the free androgen index. The diagnosis of PCOS can be made if the gynecological age is more than 2 years and persistent menstrual disturbances have been observed for more than 2 years.
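The cycle-length criteria above lend themselves to a simple decision rule. The sketch below covers that rule only (it ignores the hyperandrogenism criterion and the exclusions discussed next); function and variable names are illustrative and not part of the consensus documents.

```python
def cycle_irregular(gynecological_age_years: float, cycle_length_days: int) -> bool:
    """Apply the cycle-length criteria described above."""
    if cycle_length_days > 90:
        return True  # needs special attention at any gynecological age
    if gynecological_age_years < 1:
        return False  # irregular cycles are considered normal in the first gynecological year
    if gynecological_age_years < 3:
        return cycle_length_days < 21 or cycle_length_days > 45
    return cycle_length_days < 21 or cycle_length_days > 35

# Example: 2.5 years after menarche, 50-day cycles -> irregular
print(cycle_irregular(2.5, 50))  # True
```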
Other causes of menstrual disturbances and hyperandrogenism must be excluded (hypothyroidism, hypercortisolemia, hyperprolactinemia, congenital adrenal hyperplasia, androgen-secreting tumor). The objectives of treatment are regular menses and a decrease in the clinical features of hyperandrogenism. In addition to body weight reduction, contraceptive therapy with a progestogen with antiandrogenic action is indicated. In very young patients and in those with contraindications to estrogen therapy (venous thrombosis, migraine with aura), natural progestogen therapy in the second phase of the cycle can be used. Antiandrogens (spironolactone, finasteride) are not registered for PCOS treatment, and their use should be considered with great caution. Metformin can be used in girls with PCOS and metabolic disturbances; in addition to improving the metabolic profile, it can restore regular menses.
3.3.6. Respiratory Disorders in Obesity
In patients with obesity, the most commonly reported symptoms include an increased respiratory rate, dyspnea after low to moderate exertion, wheezing, and chest pain. Respiratory disorders such as bronchial asthma, obstructive sleep apnea (OSA) syndrome, or hypoventilation syndrome are more common in this group of patients. Several review articles have appeared in recent years on the increased prevalence of asthma in obese patients; however, the topic remains highly controversial. Increased body fat may lead to systemic inflammation, increasing pro-inflammatory serum cytokines. With decreased lung compliance, lung volume, and peripheral airway diameter, bronchial hyperresponsiveness may also be important. Factors supporting an effect of obesity on asthma include improved disease control when weight is reduced, as well as the increased medication use and poorer quality of life reported in obese patients compared with normal-weight patients. The therapeutic efficacy of inhaled corticosteroids and their combination with long-acting beta agonists (LABAs) is significantly reduced. In spirometry, lower values of forced expiratory volume in 1 s (FEV1), total lung capacity (TLC), and functional residual capacity (FRC) are observed compared with normal-weight patients with bronchial asthma.
OSA is a condition manifested during sleep, characterized by repeated shallowing or complete absence of airflow through the upper airway with preserved chest and abdominal movements. It is associated with airflow limitation and consequent transient episodes of hypoxia and hypercapnia. It also causes sleep fragmentation through activation of the sympathetic nervous system and arousals. Its prevalence in children and adolescents with overweight or obesity ranges between 13% and 59%. Features that raise suspicion of OSA include mouth breathing, pauses in the breathing pattern, snoring during sleep, concentration problems, hyperactivity, headaches, and excessive daytime sleepiness. Untreated obstructive sleep apnea alters the quality of sleep and shortens the life expectancy of those affected. Polysomnographic studies are performed to diagnose OSA. Weight loss is the first-line therapy for obese children with OSA. For children with severe OSA, non-invasive ventilation (NIV) and continuous positive airway pressure (CPAP) can be the treatment of choice. Severe obesity and OSA may lead to obesity hypoventilation syndrome, with hypoxia, hypercapnia, and reduced ventilatory drive.
Hypoventilation syndrome occurs in severe obesity, and its risk increases with increasing body weight. It is a chronic condition that reduces the patient's social activity, reduces quality of life, and increases the risk of death. It is characterized by an increase in the partial pressure of CO2 and a decrease in O2 (PaCO2 > 45 mmHg and PaO2 < 70 mmHg), once other causes such as neuromuscular disorders, pulmonary vascular pathology, iatrogenic causes (drugs, psychoactive substances), metabolic diseases, or respiratory and thoracic disorders have been excluded. Diagnostic criteria include a BMI ≥ 30 kg/m² combined with hypoventilation (PaCO2 > 45 mmHg, and during sleep > 55 mmHg for at least 10 min). Symptoms may initially be minor; as hypercapnia increases, headaches, impaired concentration, excessive sleepiness, confusion, and decreased exercise tolerance may occur.
3.3.7. The Effects of Obesity on Musculoskeletal Health
Obesity is one of the most common conditions that negatively affect bone and joint health. Evidence shows positive associations between elevated body fat and the development of slipped capital femoral epiphysis, Blount's disease, and genu varum. Moreover, fractures, musculoskeletal pain, impaired mobility, and lower extremity malalignment are more common in children and adolescents with excess weight. Persistence of obesity from childhood to adulthood may lead to an increased risk of osteoarthritis in the weight-bearing joints, particularly the knee. Longitudinal studies indicate that increased body fat may contribute to a higher risk of incident and worsening joint pain.
3.3.8. Renal Complications
It is recommended to assess kidney function in children and adolescents with obesity. In adults, obesity is an independent risk factor for chronic kidney disease. In children, this relationship is less clear, but complications of obesity (e.g., arterial hypertension, dyslipidemia, insulin resistance, hyperglycemia, inflammatory state, and autonomic nervous system dysfunction) can alter kidney function. Therefore, a basic evaluation of kidney function (creatinine level, estimated glomerular filtration rate (eGFR [mL/min/1.73 m²] = 0.413 × body height [cm]/serum creatinine [mg/dL]), and urine analysis) should be performed in children with overweight and obesity. More detailed screening for kidney dysfunction (albuminuria, albumin/creatinine ratio) should be performed in patients with obesity and concomitant arterial hypertension and type 2 diabetes. Obesity also seems to be an important risk factor for incontinence, but the interaction between these factors is complex and needs further investigation.
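A worked example of the eGFR estimate quoted above (0.413 × height [cm] / serum creatinine [mg/dL]); the input values are illustrative only.

```python
def egfr_bedside(height_cm: float, serum_creatinine_mg_dl: float) -> float:
    """Estimated GFR in mL/min/1.73 m^2 using the formula quoted above."""
    return 0.413 * height_cm / serum_creatinine_mg_dl

# Example: height 150 cm, serum creatinine 0.6 mg/dL -> about 103 mL/min/1.73 m^2
print(round(egfr_bedside(150, 0.6), 1))
```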
3.3.9. Neurological Complications
Obesity in children is a risk factor for migraine and idiopathic intracranial hypertension. Obesity in pubertal children is associated with a higher risk of idiopathic intracranial hypertension (pseudotumor cerebri), manifested by headache, nausea, vomiting, retroocular pain, and visual impairment; however, this condition is much less common in children than in adults. A possible pathogenesis of idiopathic intracranial hypertension in obesity is increased intraabdominal pressure, which in turn increases intrathoracic and intracerebral venous pressure. The most common clinical symptom of pseudotumor cerebri is headache, usually worse in the morning. It can be accompanied by nausea, vomiting, retroocular pain, decreased or blurred vision, diplopia, or even transient visual obscurations. In 19% of children, it is associated with permanent visual impairment. In younger children, irritability, apathy, and somnolence can occur. Other nonspecific neurological symptoms—ataxia, dizziness, stiff neck, seizures, and facial nerve palsy—are less common. In some children, papilledema may be the only sign of pseudotumor cerebri, without other symptoms. Idiopathic intracranial hypertension is a diagnosis of exclusion. Diagnostic criteria are the presence of characteristic clinical symptoms, including papilledema, in a patient with a normal level of consciousness, a normal neurologic physical examination (except cranial nerves), normal findings on cerebrospinal fluid examination and neuroimaging studies, and documented increased intracranial pressure on lumbar puncture. Elevated intracranial pressure in a child with obesity can be diagnosed if the cerebrospinal fluid pressure exceeds 28 cm H2O. Magnetic resonance imaging shows signs of elevated intracranial pressure. Management usually involves medication: acetazolamide, which is a diuretic but also reduces cerebrospinal fluid production. Furosemide can be used together with acetazolamide or alone if the first medication is contraindicated. In some patients, the symptoms can resolve after the diagnostic lumbar puncture.
Obesity also seems to be a risk factor for migraine progression and migraine frequency. The prevalence of episodic migraine in obese children is higher than in children of normal weight (8.9% vs. 2.5%). There is a relationship between headache pathophysiology and the response of central and peripheral mechanisms to food consumption. The suggested mechanism regards obesity as a pro-inflammatory state, which may be associated with neurovascular inflammation; elevated levels of calcitonin gene-related peptide and dysregulation of the action of orexin, leptin, and adiponectin are possible pro-inflammatory factors related to obesity. Therefore, weight control should be part of migraine treatment in a child with excessive weight.
3.3.10. Mental Health Disorders
Overweight and obesity can lead to physiological and biochemical disorders of the body, as well as a deterioration in self-esteem, well-being, and relations with the environment. In children, they often initiate a negative emotional attitude towards themselves and a sense of non-acceptance by others. In the following years, they can lead to feelings of rejection and loneliness; obese teenagers very often feel disliked, lonely, and rejected by their peers. Such overweight or obese young people have noticeable difficulties in realizing their dreams, and excessive body weight makes it difficult for them to start their adult life and pursue their professional plans. Additionally, it favors an unattractive self-image, which may contribute to loneliness, a sense of regret, sadness, and even depression. Wardle et al. found that body dissatisfaction was greater in obese children who developed obesity before the age of 16; it should therefore be identified as part of the multidisciplinary assessment. A referral to a specialist is needed if depressive and/or anxiety symptoms, suicidal risk, dysmorphophobic traits, or eating disorders are suspected.
Obesity is the main risk factor for the development of arterial hypertension in children and adolescents. Arterial hypertension (AH) is diagnosed in approximately 30% of pediatric patients with obesity, and the risk increases with the severity of obesity.
Weight gain accounts for up to 75% of the risk of primary AH. Blood pressure measurement is recommended in all children with overweight or obesity. Early diagnosis of AH is crucial for any interventions that may reduce cardiovascular morbidity and mortality later in life. Blood pressure (BP) should be measured in all children aged ≥3 years at least once a year and during any routine physician examination. In children with overweight and obesity, office BP measurements are recommended during routine health supervision visits and visits related to health problems (at least once a year), including children <3 years of age. Measurements are also recommended in children <3 years of age if there are additional risk factors such as neonatal complications, cardiac malformations, genetic diseases, acquired or congenital kidney diseases, neoplasms, drug use, and diseases inducing increased intracranial pressure.
Office Blood Pressure Measurement
The device used to measure BP must be validated for children, with an appropriately sized cuff covering 80–100% of the individual's arm circumference. Before BP measurement, the patient should be seated and relaxed for 3 to 5 min. Measurement should be performed three times, with an interval of 3 min between measurements, and the average of the last two should be used. The auscultatory method is recommended. If the oscillometric method is used, the device must be validated; if AH is detected by the oscillometric method, it needs to be confirmed using the auscultatory method. The diagnosis of AH requires a mean value of 3 independent measurements at or above the 95th percentile for age, sex, and height. It is recommended to use the standards recommended by the ESH (European Society of Hypertension) (see online calculator: https://hyperchildnet.eu/blood-pressure-calculator/, accessed on 30 July 2022).
Home Blood Pressure Monitoring
Home BP in children with obesity correlates with target organ damage better than office BP and may better reflect the effect of risk factors such as obesity and its metabolic complications. It is recommended to perform home BP monitoring with a validated oscillometric device for 6 to 7 days, with duplicate morning and evening measurements.
Ambulatory Blood Pressure Monitoring
Ambulatory blood pressure monitoring is recommended in pediatric patients with severe obesity, sleep-disordered breathing, target organ damage (left ventricular hypertrophy or microalbuminuria) with normal office BP (suspicion of masked hypertension), type 2 diabetes mellitus, or chronic kidney disease.
Assessment of Potential Target Organ Damage in Patients with Obesity and AH
If arterial hypertension is confirmed in a pediatric patient with obesity, the following diagnostic tests are recommended: (1) assessment of kidney function: blood urea nitrogen, creatinine (and glomerular filtration estimated by formula), electrolytes, urine examination, microalbuminuria; (2) evaluation of organ damage: echocardiography (to assess left ventricular hypertrophy or remodeling) and fundoscopy.
Treatment of Arterial Hypertension in Children with Obesity
Non-pharmacological therapy, including both dietary modifications and PA, is of great importance in the management of AH in children with obesity. Daily moderate- to high-intensity exercise is recommended for 60 to 90 min, and no particular type of PA is contraindicated.
In the diet, limitation of sodium intake and a proper sodium-to-potassium ratio are particularly important. Pharmacological therapy should be considered in children with grade 1 hypertension in whom BP has not adequately decreased despite 6–12 months of non-pharmacological therapy, and it is indicated in children with grade 2 hypertension and/or target organ damage. The preferred drug classes are angiotensin-converting enzyme inhibitors (ACEIs), angiotensin II receptor blockers (ARBs), and dihydropyridine calcium antagonists. If grade 2 hypertension, secondary causes, or any target organ damage are present, the patient must be referred to a specialist for further diagnostic tests and treatment.
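A minimal sketch of the office-measurement step described above: three readings, averaging of the last two, and comparison with the age-, sex-, and height-specific 95th percentile. The percentile thresholds must come from the ESH reference standards or the linked calculator and are passed in here as parameters; all names and numbers are illustrative, and the sketch only flags an elevated office measurement — diagnosis still requires repeated independent measurements and, where needed, auscultatory confirmation.

```python
def office_bp_elevated(readings_mmHg: list,
                       p95_systolic: float, p95_diastolic: float) -> bool:
    """Average the last two of three (systolic, diastolic) readings and
    compare with the 95th percentile thresholds supplied by the caller."""
    if len(readings_mmHg) != 3:
        raise ValueError("three readings taken 3 minutes apart are expected")
    last_two = readings_mmHg[1:]
    mean_sys = sum(r[0] for r in last_two) / 2
    mean_dia = sum(r[1] for r in last_two) / 2
    return mean_sys >= p95_systolic or mean_dia >= p95_diastolic

# Example: readings 126/82, 124/80, 128/84 mmHg against illustrative thresholds of 125/82 mmHg
print(office_bp_elevated([(126, 82), (124, 80), (128, 84)], 125, 82))  # True
```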
Assessment of glucose metabolism is recommended in all children and adolescents with overweight and obesity from the age of 6 years. Since there is some evidence that prediabetes is already present in approximately 5% of obese children <10 years of age, it is recommended to measure fasting blood glucose in all children with overweight and obesity from the age of 6 years as the first step to detect prediabetes and type 2 diabetes. The screening should be repeated after 2–3 years, unless there is a rapid increase in weight or the development of other cardiometabolic complications. The oral glucose tolerance test (OGTT) is recommended every two years in children >10 years of age with BMI above the 95th percentile (or earlier, if puberty has already commenced). The OGTT should be performed in a standard setting, with a glucose dose of 1.75 g/kg, up to a maximum of 75 g. The use of glycated hemoglobin A1c (HbA1c) remains controversial at pediatric age because HbA1c has lower sensitivity than fasting or OGTT plasma glucose. There is no recommendation to measure insulin concentrations during the diagnostic work-up of obesity complications in children or adolescents. Fasting insulin concentrations show considerable overlap between insulin-resistant and insulin-sensitive youth; therefore, there is no well-defined cut-off point to differentiate normal from abnormal, and there is no universally accepted, clinically useful numeric expression that defines insulin resistance. We recommend against using insulin testing as a basis for therapeutic decisions. Non-pharmacological therapy, including both dietary modifications and PA, is of great importance in the treatment of prediabetes in children with obesity. Certain medications (e.g., metformin, liraglutide) have potent effects on glucose levels, and their use may be considered under the supervision of an expert. Treatment options for pediatric patients with type 2 diabetes include insulin, metformin, and liraglutide (age limit according to the summary of product characteristics).
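A worked example of the OGTT dosing rule stated above (1.75 g of glucose per kg of body weight, capped at 75 g); the weights used are illustrative.

```python
def ogtt_glucose_dose_g(body_weight_kg: float) -> float:
    """Glucose load for a standard OGTT: 1.75 g/kg, up to a maximum of 75 g."""
    return min(1.75 * body_weight_kg, 75.0)

# Examples: a 30 kg child receives 52.5 g; a 60 kg adolescent is capped at 75 g
print(ogtt_glucose_dose_g(30), ogtt_glucose_dose_g(60))
```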
Dyslipidemias are disorders of lipoprotein metabolism that result in the following abnormalities: (1) high total cholesterol (TC); (2) high low-density lipoprotein cholesterol (LDL-C); (3) high non-high-density lipoprotein cholesterol (non-HDL-C); (4) high triglycerides (TG); (5) low HDL-C. Normal lipid and lipoprotein values in children vary by age and sex. In many patients, hyperlipidemia is caused by some underlying "non-lipid" etiology rather than a primary disorder of lipoprotein metabolism. Among the cardiovascular risk factors associated with increased morbidity and mortality, lipids and lipoproteins are of special importance, and in many studies, childhood obesity has been shown to be associated with increased levels of TC, LDL-C, and TG, and decreased HDL-C. The most frequent lipid disorder in children with obesity is combined dyslipidemia, characterized by moderate to severe elevation in TG and non-HDL cholesterol, decreased HDL-C, and mild to moderate elevation in TC and LDL-C. Dyslipidemia is the most common consequence of childhood obesity and is present in as many as 43% of obese children. It is significantly related to insulin resistance, as the latter enhances hepatic delivery of non-esterified free fatty acids for TG production and sequestration into triglyceride-rich lipoproteins. TGs are deposited in the vessel wall and initiate the process of LDL-C accumulation; they are strongly associated with the risk of developing atherosclerotic disease. LDL-C, very low-density lipoprotein (VLDL), and lipoprotein(a) are the primary apolipoprotein B-containing lipoproteins implicated in the formation of atherosclerotic lesions. HDL-C has been thought to be protective through its ability to prevent oxidation of LDL-C. Atherosclerosis starts at a young age, and the number of young people who develop atherosclerosis is increasing, especially children with risk factors such as familial hypercholesterolemia (FH), type 1 diabetes mellitus, and hypertension. In recent decades, hyperlipidemia in children and adolescents has been increasing, and many societies have identified these children as being at increased risk for premature atherosclerosis. The Bogalusa Heart Study demonstrated fatty streaks in 50% of cases between 2 and 15 years of age and in 85% of older subjects between 21 and 39 years of age. The prevalence and extent of atherosclerosis found in the aorta and coronary arteries increased with increasing BMI, BP measurements, serum TC, and LDL-C. The degree of atherosclerotic changes increased with worsening severity and a greater number of risk factors.
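Non-HDL-C, referred to above, is simply total cholesterol minus HDL-C. The sketch below computes it and flags the combined-dyslipidemia pattern described above (elevated TG and non-HDL-C with decreased HDL-C). Because, as noted, normal values vary by age and sex, the cut-offs are placeholders supplied by the caller and are not taken from the guideline.

```python
def non_hdl_c(total_cholesterol: float, hdl_c: float) -> float:
    """Non-HDL cholesterol = TC - HDL-C (same units, e.g., mg/dL)."""
    return total_cholesterol - hdl_c

def combined_dyslipidemia_pattern(tc: float, hdl: float, tg: float,
                                  tg_cutoff: float, non_hdl_cutoff: float,
                                  hdl_cutoff: float) -> bool:
    """Flag the pattern of elevated TG and non-HDL-C with decreased HDL-C.
    Cut-offs are age- and sex-specific and must be supplied by the caller."""
    return (tg >= tg_cutoff
            and non_hdl_c(tc, hdl) >= non_hdl_cutoff
            and hdl <= hdl_cutoff)

# Illustrative values only: TC 190, HDL 38, TG 160 mg/dL with placeholder cut-offs
print(combined_dyslipidemia_pattern(190, 38, 160,
                                    tg_cutoff=130, non_hdl_cutoff=145, hdl_cutoff=40))  # True
```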
4.1. Weight Goal Reduction
Weight loss goals are determined by the age of the child and the severity of obesity and related comorbidities. It has been suggested that in younger children with obesity the goal of treatment should be stabilization of body weight with successive BMI reduction. Maintenance of a stable weight for more than 1 year might be an appropriate goal for children with overweight and mild obesity, because BMI will decrease as children gain height. In older children, weight loss is recommended to reach the 85th percentile of BMI. A weight loss of up to 1–2 kg/month is safe. Rapid weight loss is not recommended because of possible adverse effects on growth. Bioelectrical impedance analysis (BIA) is a useful method to assess changes in body composition in children.
4.2. Effectiveness of Nutritional Interventions
A stepwise approach to weight control in children is recommended, taking into account the child's age, the severity of obesity, and the presence of obesity-related comorbidities. Treatment of childhood obesity involves adherence to a structured weight reduction program individualized for each child, along with the adoption of a healthy diet and lifestyle. Anti-obesity medications play a limited role in childhood and are not recommended in younger children. Bariatric surgery is reserved for morbidly obese older adolescents, but its long-term safety data are limited in this age group. The combination of increased PA and improved nutrition has shown promise as an intervention to combat obesity in children and adolescents.
4.3. Eating Behaviors and Lifestyle Modifications
Obesity prevention and treatment should focus on diet, eating behaviors, and PA, and the reduction of body fat mass should be the cumulative effect of all these changes. Efforts should be made to permanently change the lifestyle of the whole family. Nutritional behaviors such as skipping breakfast, irregular eating, snacking between meals, and insufficient consumption of vegetables and fruit, as well as a sedentary lifestyle, are proven predictors of obesity development; special attention should be paid to them in patient education. The diet and other lifestyle modifications recommended for the treatment of obesity are summarized in .
4.4. Methods of Treatment by Dietary Modification
Dietary modifications are essential in the treatment of obesity, but there is no single validated dietary strategy for weight loss in children. Various dietary modifications have been used in studies of weight loss in children with obesity.
As shown by these studies, diets with modified carbohydrate intake, such as low glycemic index and low carbohydrate diets, have been as effective as diets with standard macronutrient proportions and portion size control. A well-balanced hypocaloric diet should be initiated in all obese children in consultation with a dietician. The total daily energy of the diet should be calculated in relation to the ideal body weight for the child's height, and the macronutrient proportions should fulfill the national recommended nutrient intake levels for healthy children. The appropriate caloric restriction should be determined by a dietitian. The daily caloric value of the diet established for the ideal body weight for the child's height may be reduced by 200–500 kcal. However, it should be noted that little to no evidence supports these specific recommendations; rather, they represent expert opinion. The reduced caloric intake should not be lower than 1000 kcal/day. For children with metabolic complications of obesity, especially insulin resistance and/or diabetes, further macronutrient modifications are needed.
4.5. Dietary Advice
In dietary treatment, decisions about the range of dietary restrictions must be made depending on the degree of excess weight and existing complications. Lifestyle recommendations listed in are the basis of any intervention. Caution should be exercised regarding micronutrient and vitamin intake, particularly with a hypocaloric diet. If individually necessary, dietary supplements should be used to meet the daily recommended intake.
4.6. Traffic Light and Modified Traffic Light Diet/Front-of-Pack (FOP) Nutrition Labeling
Food labels are considered a key component of strategies to prevent unhealthy diets and obesity. Nutrition labeling can be an effective approach to encourage consumers to choose healthier products, and interpretive labels, such as traffic light labels, can be more effective. Appropriate labeling of foods with a Nutri-Score can make an important contribution to raising awareness among parents and children, supporting health-oriented purchases and improving diet quality. Food is classified into one of three groups: RED, YELLOW, or GREEN. RED foods are high in fat and/or calories; this group also includes all sweets and sweetened beverages. GREEN foods are those that are low in fat and/or calories per serving. YELLOW foods fit between the two categories. The diet should not exceed 1200 to 1500 kcal per day, with no more than four RED foods per week.
4.7. "Non-Restrictive" Approach
This approach does not specify a daily caloric intake or individual nutrients and focuses on eating foods that are low in fat and high in nutrients.
4.8. Industrial Diet (Meal Replacements)
This approach is not recommended because its efficacy and safety have not been tested in children and young adults.
4.9. Hypocaloric Diets with Low Glycemic Index
There was no evidence that the low glycemic index diet differed in effectiveness in reducing BMI or aspects of the metabolic syndrome compared with other dietary recommendations in children and adolescents with obesity. The low glycemic index diet was as effective as the low-fat diet. Studies do not indicate that a low glycemic index diet suppresses hunger or increases satiety in children and adolescents with obesity.
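A minimal sketch of the energy-prescription logic described in the dietary sections above: the daily energy requirement estimated for the ideal body weight is reduced by 200–500 kcal, with a floor of 1000 kcal/day. The requirement estimate itself must come from dietary reference tables and is passed in as a parameter; the values shown are hypothetical, and, as noted above, the specific numbers reflect expert opinion rather than strong evidence.

```python
def hypocaloric_target_kcal(energy_for_ideal_weight_kcal: float,
                            reduction_kcal: float = 300) -> float:
    """Reduce the estimated daily energy requirement by 200-500 kcal,
    with a floor of 1000 kcal/day, as described above."""
    if not 200 <= reduction_kcal <= 500:
        raise ValueError("reduction should be within the 200-500 kcal range")
    return max(energy_for_ideal_weight_kcal - reduction_kcal, 1000.0)

# Example: an estimated requirement of 1800 kcal/day with a 400 kcal reduction -> 1400 kcal/day
print(hypocaloric_target_kcal(1800, 400))
```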
4.10. Physical Exercise
Eating habits and the level of PA affect human energy balance. Current studies have shown that, in childhood, there is an increase in the frequency of sedentary behaviors, such as spending time playing or working on a computer or watching television (TV). The increase in sedentary behavior and the reduction in time spent in PA are important risk factors for the development of obesity in children. Regular PA is associated with improvements in aerobic capacity, strength, muscle growth, bone mass, and body weight or body composition. Metabolic benefits include lowered blood pressure, reductions in leptin, glycemia, and insulin resistance, and an improved lipid profile with lowering of TC and an increase in HDL-C. Physical activity also reduces the levels of pro-inflammatory cytokines and increases anti-inflammatory cytokines, such as interleukin 10 and adiponectin, even without dietary modification or other lifestyle changes. Beyond its many health benefits, research suggests that exercise can play a role in both short- and long-term weight loss and maintenance. Obese children have to work harder than healthy-weight children to perform the same task and therefore need an appropriately adjusted load. An exercise program for obese children should aim to increase caloric expenditure.
Modification of Physical Activity
The effects of PA may depend on the type of PA (aerobic exercise (AE), resistance training (RT), or mixed (CRAE)). For children with obesity, aerobic training (e.g., jumping rope, dancing, running, cycling) at moderate or moderate to vigorous intensity, for 30–60 min a day, 3–5 times a week, is recommended. Meta-analyses available in the literature suggest that AE interventions are effective in lowering fasting insulin levels, insulin resistance, and body fat percentage (BF%), as well as improving blood lipid levels in adolescents with obesity. In addition, AE training lowers overall body weight, BMI, and LDL-C. RT increases muscle strength, power, and/or endurance and is usually done 1 to 3 times a week, while the number of repetitions, series, duration, and intensity of the exercises depend on the RT program. AE training is optimal for reducing BF%, while RT is optimal for increasing lean body mass. Mixed training (CRAE) includes both AE and RT elements in a single exercise protocol to provide the benefits of each method and is more beneficial for improving metabolic parameters and cardiovascular risk factors than AE or RT alone. CRAE training generally involves performing a series of RT (one set of 8–20 repetitions for the upper and lower body), followed by a series of AE (20–30 min of moderate intensity) in a single exercise session. It has been shown that CRAE training improves both cardiorespiratory efficiency and muscle strength and reduces body fat, especially visceral fat. The most appropriate exercise prescription for reducing obesity in children is the CRAE training protocol, which includes both muscle-strengthening (RT) and aerobic (AE) components, with an emphasis on fat reduction and long-term effects.
4.11. Family Cognitive Behavioral Therapy and Psychotherapy
Psychological and/or psychotherapeutic support is an essential part of the treatment of obesity in children and adolescents. Isolated treatment of obesity is not effective due to its multifaceted nature and the multitude of factors that both condition and maintain it. Adherence to medical treatment for obesity requires a wide variety of social and psychological skills.
Psychological support aims to develop these skills to ensure compliance with medical recommendations. Psychological diagnosis can help with the correct choice of interaction methods and reduce the burden of care for the patient. At the beginning of the interaction, it is important to establish a proper psychological and/or psychiatric diagnosis. Patients who struggle with additional psychiatric disorders may require additional interventions before obesity treatment can be addressed. A correct diagnosis is also intended to allow the most appropriate methods of interaction to be selected. Understanding the patient's point of view can protect the medical team from burnout, because it allows a realistic assessment of the pace and possibilities of the treatment process. Obesity is a chronic disease that triggers an adaptation process in the child, and this adaptation process consists of different stages. As a chronic disease, obesity will provoke different responses in children and adolescents; at some stages of adaptation, increased sadness and anger may occur. Being able to express these emotions and receiving help to experience them can contribute to better adaptation to chronic disease. Healthy adaptation can, in turn, be associated with greater participation in the treatment process. Enhanced behavioral control is difficult in a dysregulated nervous system. Psychological support is intended to help restore balance and facilitate natural self-regulation in children and adolescents with obesity. When a child's nervous system is balanced and not overloaded with excess stress, the child has greater access to specific cognitive skills and intentional actions. A child or adolescent who is able to regulate his or her level of arousal is able to withstand discomfort more easily and cope with unpleasant emotions. Psychological help for an obese child should aim at healthy emotional regulation, as this will facilitate tasks that require self-control. Cognitive behavioral therapy is a recommended approach because it allows the development of skills relevant to lifestyle and behavior change; cognitive behavioral therapy and its methods are recommended for the treatment of obesity. An empathetic attitude on the part of the therapist is also considered important, expressed in not judging the difficulties experienced by the patient; this matters because criticism does not serve the long-term achievement of goals and can lead to reduced motivation and poorer well-being. Cognitive behavioral therapy is designed to help children master, among other techniques, (1) continuous monitoring of their behavior, (2) goal setting and management, (3) problem solving, (4) assertiveness, and (5) the ability to regulate emotions. These skills are intended to help the child cope with temptations and maintain a healthy lifestyle. Additionally, cognitive interventions that change maladaptive thinking processes into ones that serve health and life can be helpful. The important role of motivation in maintaining change should be considered; if motivation is insufficient, the focus should be on the use of motivational interviewing. Psychological support for children with obesity also has a protective function against psychological disturbances. Obesity is a risk factor for the development of psychosocial problems and mental disorders.
Children with obesity are more likely to be isolated from peers and treated as less attractive playmates. This may lead to the development of low self-esteem and mood disorders such as anxiety and depression. Psychological interventions can improve the psychosocial situation of children and allow for the restoration of healthy self-esteem. Psychotherapy is a necessary part of the treatment of eating disturbances such as emotional eating, BED, and night eating syndrome. Parental involvement in therapy is crucial for younger children, and it should be remembered that for school children parental involvement in the child's therapy remains important. The influence of parents on children's dietary compliance and PA is significant. The success of therapy will also depend on the functioning of the entire family system and the patient's environment. Therefore, systemic therapy may be a helpful solution in the treatment of childhood obesity.

4.12. Pharmacotherapy

Pharmacotherapy for children or adolescents with obesity may only be considered after a formal program of intensive lifestyle modification has not been effective in limiting weight gain or improving obesity complications, in adolescents aged ≥12 years with obesity defined as a BMI corresponding to ≥30 kg/m² in adults. The only drug registered in Poland and Europe for people <18 years of age is the human glucagon-like peptide 1 (GLP-1) analog liraglutide. While there are currently two formulations of liraglutide on the market, only one has been approved for the treatment of obesity, under the name Saxenda. It may be used as a supplement to a healthy diet and increased PA. Liraglutide increases the postprandial insulin level in a glucose-dependent manner, reduces glucagon secretion, delays gastric emptying, and induces weight loss through reductions in appetite and energy intake. The approval of liraglutide under the name Saxenda was based on a 56-week, double-blind, randomized, placebo-controlled study in 251 pediatric pubertal patients aged 12 to 17 years. After a 12-week lifestyle run-in period, patients were randomized to Saxenda (3.0 mg) or placebo once a day. The mean change in BMI SDS from baseline to week 56 was −0.23 in the Saxenda group and −0.00 in the placebo group. The estimated treatment difference in the reduction in BMI SDS from baseline between Saxenda and placebo was −0.22 (95% CI: −0.37, −0.08; p = 0.0022). Approved pharmacotherapy for obesity should be administered only with a concomitant lifestyle modification program of the highest intensity available, and only by clinicians who are experienced in the use of drugs supporting the treatment of obesity and are aware of the potential for adverse reactions. Most adverse events of liraglutide are mild or moderate gastrointestinal events, including nausea, vomiting, and diarrhea. The therapy should be discontinued and reevaluated if patients have not lost at least 4% of their BMI or BMI z-score after 12 weeks on the 3.0 mg/day or maximum tolerated dose. It is not recommended to use metformin as a drug supporting the treatment of obesity in children and adolescents, although in children with overweight or obesity and metabolic complications metformin reduces hepatic glucose production and increases peripheral insulin sensitivity.
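Purely as an illustration of the 12-week response rule described above (discontinue and re-evaluate if BMI or BMI z-score has not fallen by at least 4% on the 3.0 mg/day or maximum tolerated dose), the short Python sketch below computes the relative reduction and applies that threshold. The function names and example values are hypothetical, and this is not a clinical decision tool.

```python
def relative_reduction(baseline: float, week_12: float) -> float:
    """Fractional reduction from baseline (positive values mean a decrease)."""
    return (baseline - week_12) / baseline


def meets_12_week_response(baseline_bmi: float, week_12_bmi: float,
                           threshold: float = 0.04) -> bool:
    """Illustrative check of the response rule described in the text:
    therapy is reconsidered if BMI (or BMI z-score) has not fallen by >= 4%
    after 12 weeks on the maintenance dose. Not a clinical decision tool."""
    return relative_reduction(baseline_bmi, week_12_bmi) >= threshold


# Hypothetical example: BMI falling from 36.0 to 34.2 kg/m2 is a 5% reduction.
print(meets_12_week_response(36.0, 34.2))  # True
```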
It is not recommended to prescribe weight loss-supporting drugs off-label because of: (1) the limited data on safety and efficacy in children and adolescents, (2) the limited efficacy demonstrated in adults for most agents, (3) the need to weigh the relative risk of drug-induced adverse events in children and adolescents against the long-term theoretical potential of a drug to reduce obesity complications and mortality, and (4) the risk of creating a false belief that the drug can replace the basic, effective, and safe methods of obesity treatment, namely dietary change and increased PA.

4.13. Bariatric Surgery

4.13.1. Requirements for Reference Centers

Bariatric surgery is more effective than conservative management. Numerous studies have demonstrated the positive results of bariatric surgery on BMI reduction, reduction of blood pressure values, improvement in lipid and carbohydrate metabolism, and reduction of OSA. Bariatric surgery should only be performed in highly specialized centers based on the collaboration of an experienced multidisciplinary team capable of providing long-term care. The team should include a pediatric endocrinology and diabetes specialist or a pediatrician with experience in obesity treatment, a psychologist, an anesthetist, a pediatric surgeon, a dietitian, and a physiotherapist. Depending on the needs, the team can be supplemented with specialists from other disciplines. The center should provide nephrology, gastroenterology, orthopedics, cardiology, pulmonology, psychiatric, and other consultations.

4.13.2. Qualification

Bariatric surgery should be considered in pediatric patients with BMI > 40 kg/m², or BMI > 35 kg/m² with associated diabetes mellitus, prediabetes, hypertension, OSA syndrome, dyslipidemia (especially hypertriglyceridemia), signs of intracranial hypertension (pseudotumor cerebri), MAFLD, severe skeletal abnormalities, or urinary incontinence. An additional indication is a significant deterioration in the patient's quality of life and limitation of daily activities. The decision to qualify for surgical treatment should be preceded by at least 12 months of treatment with modification of diet and PA and, in selected cases, pharmacotherapy. The best candidates for surgery are patients who have achieved satisfactory results with this treatment but in whom the severity of obesity or its complications continues to threaten their health and life. However, a prerequisite is that the patient and their parents are able to give informed consent based on a complete understanding of the nature of the surgery and its risks and benefits. It is also necessary to ensure that the minor patient has the support of their family during the preoperative and postoperative period. Consent should be preceded by psychological and psychiatric counseling for the patient and their family and, in selected cases, by behavioral therapy. Currently, sexual maturity of at least Tanner stage IV, completion of skeletal maturation, or completion of the growth process is no longer a prerequisite, since no negative effects of bariatric surgery on growth and sexual maturation have been proven.
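The BMI component of the qualification criteria above can be restated compactly. The sketch below is an illustration only, under the assumption that age-appropriate BMI evaluation, the required 12 months of prior lifestyle treatment, informed consent, and the contraindications discussed next are assessed separately; the function and set names are hypothetical.

```python
# Comorbidities listed in the text that lower the BMI threshold to 35 kg/m2.
QUALIFYING_COMORBIDITIES = {
    "diabetes mellitus", "prediabetes", "hypertension", "OSA",
    "dyslipidemia", "pseudotumor cerebri", "MAFLD",
    "severe skeletal abnormalities", "urinary incontinence",
}


def meets_bmi_criterion(bmi_kg_m2: float, comorbidities: set) -> bool:
    """Illustrative restatement of the BMI criterion only:
    BMI > 40 kg/m2, or BMI > 35 kg/m2 with at least one listed comorbidity."""
    if bmi_kg_m2 > 40:
        return True
    return bmi_kg_m2 > 35 and bool(comorbidities & QUALIFYING_COMORBIDITIES)


print(meets_bmi_criterion(37.2, {"OSA"}))   # True
print(meets_bmi_criterion(37.2, set()))     # False
```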
Contraindications to bariatric surgery include substance or alcohol addiction, pregnancy or planning a pregnancy within 2 years of surgery, breastfeeding, lack of informed consent and consent to surgery, lack of cooperation from the patient and family, untreated psychiatric illness, severe personality disorders, incurable debilitating illness that is life-threatening in the short term, and high anesthetic risk. Relative contraindications to surgery, or indications for its postponement, are states of exacerbation or temporary imbalance of chronic diseases. The decision about bariatric surgery should be made with great caution in patients with intellectual disability because of potential problems with following recommendations after surgery.

4.13.3. Types of Bariatric Surgery

There are many types of bariatric procedures. The appropriate method is chosen by the doctor in collaboration with the patient, based on the patient's health history, medical indications, and risk assessment. Laparoscopic surgery is the preferred surgical technique because of its lower surgical risk. Among the interventions with well-documented effects on weight reduction and expected metabolic outcomes, sleeve gastrectomy (SG) and Roux-en-Y gastric bypass (RYGB) are the most commonly performed in adolescents.

4.13.4. Post-Treatment Monitoring

For at least two years after surgery, preferably until transfer to adult specialist care, the patient should remain under close multispecialty surveillance by the treating center. Adolescents should have access to lifelong monitoring following bariatric surgery to ensure that nutritional requirements, and the risk of developing post-bariatric surgery-related nutritional deficiencies, are monitored. The type and frequency of nutritional monitoring should reflect the bariatric procedure and may need to be individualized. The first post-operative visit should take place preferably 7–14 days after the procedure. The follow-up schedule for the first 6 months then includes four visits, at 1, 2, 3, and 6 months. Until the end of the second year after the procedure, subsequent visits should be carried out every 6 months. After 2 years, patients should be offered transition to adult care, with monitoring of nutritional status at least annually as part of multidisciplinary care management. Renal and liver function, full blood count, and ferritin have to be monitored at 3, 6, and 12 months in the first year and then at least annually. Regular monitoring of folates, vitamin B12, 25-hydroxyvitamin D, and calcium is essential. PTH levels have to be checked, if not performed before surgery, to exclude primary hypoparathyroidism. HbA1c and lipids have to be monitored in patients with preoperative diabetes and dyslipidemia. The assessment of other minerals and vitamins (zinc, selenium, thiamine, etc.) depends on specific symptoms and comorbidities. Regular bone mineral density assessment (preferably annually) also has to be considered until peak bone mass has been reached. Once the patient has reached adulthood, treatment following bariatric surgery should be provided in adult reference centers. In the first year post-operation, bariatric surgery results in a substantial weight loss of about 37%, leading to a significant decrease in all obesity-related metabolic complications and significantly improving health-related quality of life. However, in longer follow-up, weight regain is observed in 50% of patients.
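A minimal sketch of the follow-up rhythm described above (first visit 7–14 days after surgery, visits at 1, 2, 3, and 6 months, every 6 months until the end of the second year, and annually thereafter) is shown below; months are approximated as 30-day blocks, and the function name is hypothetical.

```python
from datetime import date, timedelta


def follow_up_schedule(surgery: date, horizon_years: int = 5) -> list:
    """Approximate post-bariatric follow-up dates following the schedule in
    the text; months are simplified to 30-day blocks."""
    visits = [surgery + timedelta(days=10)]                             # first visit, ~7-14 days
    visits += [surgery + timedelta(days=30 * m) for m in (1, 2, 3, 6)]  # first 6 months
    visits += [surgery + timedelta(days=30 * m) for m in (12, 18, 24)]  # every 6 months to year 2
    visits += [surgery + timedelta(days=365 * y)                        # then annually
               for y in range(3, horizon_years + 1)]
    return visits


for visit in follow_up_schedule(date(2024, 1, 15)):
    print(visit.isoformat())
```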
Reduced bone mass and nutritional deficiencies have also been reported in up to 90% of patients.

4.14. Effectiveness of Obesity Treatment in the Pediatric Population

Obesity treatment in the pediatric population aims to change the behavioral habits of the patient and their closest environment (family, neighborhood, school). In long-term evaluation, those changes should result in improved quality of life and a decreased risk of obesity complications. However, in everyday practice, clinical evaluation, and research settings, several anthropometric measurements are used.

4.14.1. BMI

The simplest, most often used measurement is BMI related to a reference population, presented as a standard deviation score (SDS, z-score), BMI centiles, or BMI expressed as a percentage of the 95th percentile (%BMIp95). These measurements are simple to use and repeatable. They can be performed in almost every facility with very limited equipment. Based on several measurements over time, it is easy to track changes in the weight status of the patient using local or WHO-based centile charts. A decrease in SDS of 0.5 over 0–6 months of intervention is thought to be associated with a decrease in body fat. As is known, these methods have serious limitations. They do not really track changes in health status, only in relative body mass. Additionally, they track neither the decrease in fat tissue nor the increase in muscle mass. This is why, nowadays, BMI-based measures can and should be used in population-based studies and screening procedures as the "best available" method; unfortunately, there is no better gold standard for clinical practice. Waist circumference can be used as a measure of visceral fat change, as it is more accurate for tracking changes in fat tissue, yet it is not effective in assessing increases in lean body or muscle mass.

4.14.2. Other Anthropometric Measurements

More precise methods such as bioimpedance, dual energy X-ray absorptiometry (DEXA), CT, or MRI are used mostly in tertiary reference centers for research purposes. The availability of good-quality, reproducible bioimpedance is increasing, giving more accurate results on changes in fat and fat-free mass. This method needs trained staff and a prepared patient to give accurate and replicable measurements. DEXA and MRI are reserved mostly for clinical trials and also have some limitations, such as the lack of standard charts/values for the pediatric population.

4.14.3. Validation of Treatment Effects

There are limited data on the impact of body mass/fat mass reduction on long-term health effects assessed from childhood until late adulthood. The available data come mainly from observational or retrospective studies in which few potential sources of bias were accounted for. This also limits the usefulness of both anthropometric and equipment-based measures for assessing changes in obesity. Moreover, the assessment of changes in behavior is even harder, as it is mostly based on questionnaire/survey tools. Assessment of nutritional or PA habits is importantly limited by self-awareness and veracity. PA is easily assessed by simple screening methods (step test, gait test, strength assessment) in both primary and reference centers; therefore, the implementation of these methods would probably improve the quality of the assessment of changes in patients. As of now, there is no ideal measure of the long-term effectiveness of lifestyle changes that can be used in daily clinical practice.
Long-term follow-up of 30–40 years, which would be needed to detect a reduction in the development of obesity complications and in mortality, is available only in a limited number of population-based studies. Moreover, focusing on weight- and BMI-dependent measures may increase the risk of weight stigma and weight bias, which can contribute to discrimination and can arise when children do not fit social norms for body weight or shape. In practice, this can translate into an increased risk of depression, eating disorders, and low self-esteem, additionally contributing to overeating and a decrease in PA. All these factors complicate the qualitative and quantitative assessment and comparison of different public health, clinical, and healthcare interventions. In most interventional studies, independently of their structure (family-based, school-based, individual, and group interventions), BMI or a related measure is still used as the most important and most easily compared outcome. On the other hand, it is hard to expect that other, easier-to-use measures will emerge, especially given the long-term consequences, relapsing character, and multifactorial nature of obesity.

4.14.4. Long-Term Monitoring

Monitoring and evaluation are an essential element of most processes, including the therapeutic process in obesity. The main goal of obesity treatment in children and adolescents is to prevent and treat obesity complications, including metabolic disorders, and to improve the quality of life of patients. Treatment of obesity in children should result in the development of health-promoting behaviors in the field of nutrition and PA, and their consolidation for the rest of the child's life. There is evidence of short-term efficacy of multi-module interventions in the treatment of childhood obesity for age groups up to 6 years, 6 to 11 years, and 12 to 17 years. Obesity, as a chronic disease, requires long-term lifestyle changes and thus long-term patient monitoring. One should remember the possibility of recurrence of the disease, and thus the need to re-evaluate the causes of its occurrence and to select appropriate treatment methods tailored to the patient's abilities and needs. There are no established long-term patterns for how often a patient with obesity should undergo specialist visits once he or she achieves the goals set in the treatment plan, which include not only weight reduction but, above all, behavior modification. Regular visits at intervals that allow the therapeutic effect to be maintained and body weight gain to be identified early should be recommended. In the case of bariatric surgery, after the closer surveillance of the first two years, one visit per year is recommended in the following years. Weight loss goals are determined by the age of the child and the severity of obesity and related comorbidities. It has been suggested that in younger children with obesity the goal of treatment should be stabilization of body weight with successive BMI reduction. Maintenance of a stable weight for more than 1 year might be an appropriate goal for children with overweight and mild obesity, because BMI will decrease as children gain height. In older children, weight loss is recommended to reach the 85th BMI percentile. A weight loss of up to 1–2 kg/month is safe. Rapid weight loss is not recommended because of possible adverse effects on growth. Bioelectrical impedance (BIA) is a useful method to assess the change in body composition in children.
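To make the BMI-SDS monitoring described in Sections 4.14.1 and 4.14.4 concrete, the sketch below computes BMI, converts it to a z-score with the LMS method commonly used for growth references, and checks the 0.5-SDS-over-6-months criterion mentioned earlier. The LMS values here are placeholders, not real reference data; in practice they must be taken from WHO or local BMI-for-age tables for the child's sex and age.

```python
from math import log


def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2


def lms_z_score(value: float, l: float, m: float, s: float) -> float:
    """Standard LMS transformation used by WHO/CDC-type growth references."""
    if l == 0:
        return log(value / m) / s
    return ((value / m) ** l - 1) / (l * s)


# Placeholder LMS values for illustration only; real values must come from
# WHO or local BMI-for-age reference tables for the child's sex and age.
L_REF, M_REF, S_REF = -1.6, 17.8, 0.12

sds_start = lms_z_score(bmi(58.0, 1.45), L_REF, M_REF, S_REF)
sds_6mo = lms_z_score(bmi(54.0, 1.49), L_REF, M_REF, S_REF)

# The text cites a decrease of 0.5 SDS over 0-6 months as the change expected
# to be associated with a reduction in body fat.
delta = sds_start - sds_6mo
print(f"delta SDS = {delta:.2f}, criterion met: {delta >= 0.5}")
```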
A stepwise approach to weight control in children is recommended, taking into account the child's age, the severity of obesity, and the presence of obesity-related comorbidities. Treatment of childhood obesity involves adherence to a structured weight reduction program individualized for each child, along with the adoption of a healthy diet and lifestyle. Anti-obesity medications play a limited role in childhood and are not recommended in younger children. Bariatric surgery is reserved for morbidly obese older adolescents, but its long-term safety data are limited in this age group. The combination of increased PA and improved nutrition has shown promise as an intervention to combat obesity in children and adolescents. Obesity prevention and treatment should focus on diet, eating behaviors, and PA, and the reduction of body fat mass should be the cumulative effect of all these changes. Efforts should be made to permanently change the lifestyle of the whole family. Nutritional behaviors such as skipping breakfast, irregular eating, snacking between meals, and insufficient intake of vegetables and fruits are proven predictors of obesity development, as is a sedentary lifestyle; special attention should be paid to them in patient education. The diet and other lifestyle modifications recommended for the treatment of obesity are summarized in . Dietary modifications are essential in the treatment of obesity, but there is no single validated dietary strategy for weight loss in children. Various dietary modifications have been used in scientific research on weight loss in children with obesity. As these studies show, diets with modified carbohydrate intake, such as low glycemic index and low carbohydrate diets, have been as effective as diets with standard macronutrient proportions and portion size control. A well-balanced hypocaloric diet should be initiated in all obese children in consultation with a dietitian. The total daily energy of the diet should be calculated in relation to the ideal body weight for the height of the child, and the macronutrient proportions should fulfill the National Recommended Nutrient Intake Levels for Healthy Children. The appropriate caloric restriction should be determined by a dietitian. The daily caloric value of the diet established for the ideal body weight for the height of the child may be reduced by 200–500 kcal. However, it should be noted that little to no evidence supports these specific recommendations; rather, they represent expert opinion. The reduced caloric intake should not be lower than 1000 kcal/day. For children with metabolic complications of obesity, especially insulin resistance and/or diabetes, further macronutrient modifications are needed. In dietary treatment, decisions about the range of dietary restrictions must be made depending on the degree of excess weight and existing complications. Lifestyle recommendations listed in are the basis of any intervention. Caution should be exercised regarding micronutrient and vitamin intake, particularly for the hypocaloric diet; if individually necessary, dietary supplements should be used to meet the daily recommended intake. Food labels are considered a key component of strategies to prevent unhealthy diets and obesity. Nutrition labeling can be an effective approach to encourage consumers to choose healthier products, and interpretive labels, such as traffic light labels, can be more effective.
Appropriate labeling of foods with a Nutri-Score can make an important contribution to raising awareness among parents and children, supporting health-oriented purchases and improving diet quality. In the traffic light approach, food is classified into one of three groups: RED, YELLOW, or GREEN. RED foods are foods that are high in fat and/or calories; this group also includes all sweets and sweetened beverages. GREEN foods are those that are low in fat and/or calories per serving. YELLOW foods fit between the two categories. Daily intake should not exceed 1200 to 1500 calories, and no more than four RED foods should be eaten per week. It does not consider the stated daily caloric intake or individual nutrients and focuses on eating foods that are low in fat and high in nutrients. It is not recommended because efficacy and safety have not been tested in children and young adults. There was no evidence that the low glycemic index diet differed in effectiveness in reducing BMI or aspects of metabolic syndrome compared with other dietary recommendations in children and adolescents with obesity. The low glycemic index diet was as effective as the low-fat diet. Studies do not indicate that a low glycemic index diet suppresses hunger or increases satiety in children and adolescents with obesity. Eating habits and the level of PA affect human energy balance.
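As a worked example of the caloric-restriction guidance given earlier (the energy requirement calculated for the ideal body weight for height, reduced by 200–500 kcal and never below 1000 kcal/day), consider the sketch below; the requirement value and function name are hypothetical, and the actual prescription belongs to the dietitian.

```python
def restricted_energy_target(requirement_kcal: float,
                             reduction_kcal: float = 300.0,
                             floor_kcal: float = 1000.0) -> float:
    """Apply the 200-500 kcal reduction described in the text to an energy
    requirement estimated for the ideal body weight for height, but never
    prescribe less than 1000 kcal/day."""
    if not 200.0 <= reduction_kcal <= 500.0:
        raise ValueError("reduction should stay within the 200-500 kcal range")
    return max(requirement_kcal - reduction_kcal, floor_kcal)


# Hypothetical example: a 1900 kcal/day requirement reduced by 400 kcal.
print(restricted_energy_target(1900.0, reduction_kcal=400.0))  # 1500.0
```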
According to the recommendations of the WHO, prophylactic and preventive activities should occupy the leading position among actions aimed at reducing the occurrence of overweight and obesity in the population of children and adolescents. The increasing incidence of overweight and obesity among children and adolescents in European countries requires quick and effective measures by governmental and non-governmental institutions, local governments, the food industry, the health care system, and educational institutions, as well as at the level of families and individuals themselves.

5.1. The Importance of the Family

Parenthood is based on caring for children and their development.
It has been confirmed that the basic criterion for the proper development of adults as parents is the successful development of their offspring . Parents' knowledge and participation are crucial for them to take the appropriate, necessary actions to maintain their child's health. The guidelines emphasize that all activities aimed at preventing excessive weight gain, improving diet, and increasing the level of PA in children and adolescents must actively involve parents and guardians. Education aimed at parents should emphasize the importance of their role in modeling health behavior (diet, exercise), control, support, and motivation. In the early stages of life, the mechanism of learning through imitation, i.e., repeating and recreating the activities, behaviors, and choices of parents, plays a special role; it is therefore important that parents and guardians scrutinize their own behavior. Data show that most adults' behaviors are shaped by experiences from their own family home . Educational activities are also necessary regarding the principles of proper nutrition at various stages of a child's life, the recommended time and forms of PA, and the impact of excessive body weight on the child's health, as well as ways of reducing obesity. The key is for parents to understand the essence of obesity as a disease: parents often do not perceive a child's excess body weight in terms of a disease and thus do not take any intervention measures .

The role of the parent in the prevention of obesity should focus on shaping correct nutritional behavior, starting with the proper nutrition of the parents themselves, exclusive breastfeeding of infants up to the age of 6 months, expanding the child's diet in accordance with the recommendations, and then maintaining a proper diet. Parental control of the child's menu is also important when the child begins to make food choices on their own. The parent's task is to take dietary modification measures to prevent excess weight gain and, if necessary, to reduce excess body weight. The correct formation of patterns of behavior related to PA is also the responsibility of parents and guardians who, by their own example, stimulate children to engage in PA. The parent's task is to enable the child to comply with the WHO guidelines on PA: children and adolescents require, on average, 60 min a day of moderate- to vigorous-intensity aerobic activity .

In conclusion, the role of the parent and the family is to create an environment that models appropriate health behavior, is flexible and ready to change when the child's health is threatened, and also supports the child in pro-health behaviors. These family tasks should be carried out in the home environment but also outside it, involving other key people in the process (e.g., grandparents, neighbors, or friends who help care for the child). Lifestyle modification in a child is most effective when the changes affect all family members. Behavioral correction may also bring health benefits to household members with a healthy body weight, while not causing a feeling of exclusion or stigmatization in a child with excessive body weight .

5.2. Prevention–Prenatal Period

The prevention of childhood obesity should start in the pre-pregnancy period, because both preconception and perinatal maternal health, and especially BMI, consistently predict excessive weight in the offspring .
The modifiable, pregnancy-related risk factors for the development of childhood overweight and obesity are: high maternal preconception BMI, excessive weight gain during pregnancy, maternal gestational diabetes mellitus, hypertension, and smoking during pregnancy . These factors are related to the newborn's low birth weight, macrosomia, and being born small for gestational age (SGA) or large for gestational age (LGA), all of which are associated with an increased risk of high fat mass and metabolic disturbances in later life . It has been shown that women with excessive weight are twice as likely to have a child with overweight or obesity compared with women of normal weight . In addition, a disturbed intrauterine environment caused by an elevated glucose level in the mother's blood is related to an increased risk of increased birth weight, obesity, and metabolic disorders later in life . Other prenatal conditions, namely hypertension and smoking during pregnancy, are associated with the risk of low birth weight and SGA. Prevention actions should focus on the modifiable pregnancy-related risk factors for childhood overweight and obesity. A healthy lifestyle, PA, and a balanced diet that maintain a normal body weight before conception, as well as proper weight gain during pregnancy, should reduce the risk of overweight and obesity in the child . In women at risk of gestational diabetes mellitus, the prevention, early diagnosis, and proper treatment of glucose metabolism disturbances are essential for the child's health . In addition to monitoring the glucose level and possible insulin treatment, a diet with a reduced carbohydrate content and PA are crucial. In pregnant women with low weight and undernutrition, the risk of having infants with SGA or low birth weight is high; for these mothers, an energy-balanced, protein-supplemented diet could be considered .

5.3. Nutrition for Children 0–2 Years

Proper nutrition in the first period of life is primarily intended to meet the demand for energy and necessary nutrients, ensuring proper physical and psychomotor development; this also helps prevent the development of overweight. It is also recommended to avoid excessive weight gain and/or an increased weight-to-length ratio from the first months of life, as children with obesity are more likely to become adults with obesity . The goal to be pursued is exclusive breastfeeding for the first 6 months of life; partial or shorter breastfeeding is also beneficial. Breastfeeding should continue for as long as desired by the mother and baby . Human milk produced in sufficient quantity fully satisfies the infant's need for all necessary nutrients, while ensuring proper development in the first six months of life. Healthy, exclusively breastfed infants aged 1–6 months consume approximately 75 ± 12.6 g of milk when feeding from one breast and 101 ± 15.6 g when feeding from both breasts. The average number of feedings decreases with the age of the baby and is as follows:
▪ in the first six months of life, 8–12/24 h;
▪ in the second half of the first year of life, 6–8/24 h;
▪ in the 2nd year of life, 3–6/24 h .
The aim should be for a child over 1 year of age to no longer be fed at night. Infants who are not fed naturally should receive breast milk substitutes. Based on expert consensus, a recommendation was formulated that, after reaching the 12th month of life, breastfeeding should continue for as long as desired by the mother and baby; during this time, it is recommended to provide complementary foods.
The introduction of complementary products should start when the infant shows the developmental skills needed to consume them, usually not earlier than 17 weeks of age (the beginning of the fifth month of life) and not later than 26 weeks of age (the beginning of the seventh month of life) . In the nutrition of toddlers, there are significant changes in eating patterns related to the transition from a typically milk-based (liquid) diet to a more varied diet (infant diet → transitional diet → family, table diet). During this period, behavior and food preferences also form. The demand for energy and most nutrients in toddlers is reduced per 1 kg of body weight compared to infancy, and for some components it remains relatively constant .

5.4. Nutrition from Preschool to Adolescence

Eating a variety of vegetables and fruits, whole grains, a variety of lean protein foods, and low-fat and fat-free dairy products is essential for maintaining a normal body weight and health . It is also recommended to limit foods and beverages with added sugars, solid fats, or sodium, as well as alcoholic and energy drinks. Rational nutrition should optimally include five meals a day. Appropriate proportions between meals and regular hours of their consumption should be promoted .

5.4.1. Physical Activity

The first years of life are essential for starting obesity prevention focused on promoting and maintaining an appropriate level of PA. Prevention strategies should include families, schools, social networks, media, and the general community, which should promote a healthy lifestyle by giving an example to follow or providing a supportive environment . For many children, maintaining an appropriate level of PA may be sufficient to prevent obesity. Children who are physically active have a lower body fat content than their physically inactive peers . The 2020 WHO physical activity guidelines call for children and adolescents aged 5 to 17 to accumulate at least an average of 60 min of moderate- to vigorous-intensity PA (MVPA) per day, mostly aerobic. They also recommend that vigorous physical activities and exercises to strengthen muscles and bones be undertaken at least 3 days a week. Infants (<1 year) should be encouraged to be physically active several times a day through supervised, interactive floor-based play. Toddlers (1–2 years) and preschoolers (3–4 years) should accumulate at least 180 min of PA at any intensity, including MVPA, spread throughout the day. A higher level of PA than the recommended minimum is associated with additional health benefits, such as increased physical fitness (cardiorespiratory and muscular fitness), a decrease in body fat, and improvements in cardiometabolic health (BP, dyslipidaemia, glucose, and insulin resistance), bone health, cognitive outcomes, and mental health .
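The age-banded minimums above translate into a simple set of checks. The sketch below (Python, purely illustrative) encodes them for orientation; the thresholds follow the text, while the function, its parameters, and the interpretation of "several times a day" for infants (taken here as at least three play sessions) are assumptions made for illustration.

def meets_activity_minimum(age_years, avg_mvpa_min_per_day=0.0,
                           avg_total_pa_min_per_day=0.0,
                           strengthening_days_per_week=0,
                           active_play_sessions_per_day=0):
    """Rough check of reported activity against the age-banded minimums described above."""
    if age_years < 1:
        # Infants: physically active several times a day through supervised,
        # interactive floor-based play ("several" interpreted here as >= 3 sessions).
        return active_play_sessions_per_day >= 3
    if age_years < 5:
        # Toddlers (1-2 years) and preschoolers (3-4 years): >= 180 min of PA
        # at any intensity, spread throughout the day.
        return avg_total_pa_min_per_day >= 180
    if age_years <= 17:
        # Ages 5-17: an average of >= 60 min of MVPA per day, mostly aerobic,
        # plus muscle- and bone-strengthening activity on >= 3 days a week.
        return avg_mvpa_min_per_day >= 60 and strengthening_days_per_week >= 3
    return True  # adults are outside the scope of this sketch

# Example: a 10-year-old averaging 45 min of MVPA per day with two strengthening
# sessions a week does not yet meet the recommended minimum.
meets_activity_minimum(10, avg_mvpa_min_per_day=45, strengthening_days_per_week=2)  # -> False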
5.4.2. Sedentary Behaviors

There is evidence to suggest that, in the pediatric population, greater time spent in sedentary behavior (especially screen time, including TV viewing) is associated with excessive body weight and poorer health outcomes, such as decreased physical fitness and cardiometabolic health . This can be explained by the fact that screen time competes with PA time and therefore displaces energy expenditure . Moreover, screen time is often associated with increased consumption of food, exposure to high-calorie, nutrient-poor food, and shorter sleep duration . Reallocation of sedentary time to MVPA is associated with a reduction of adiposity among youth . Evidence has suggested that screen time over 2 h per day is related to a higher risk of overweight/obesity in children. Therefore, it is recommended to limit the time spent in sedentary behavior to 2 h per day and to break up long periods of sitting as often as possible. Less time spent in sedentary behaviors appears to be associated with better health outcomes . The 2020 WHO guidelines call for children and adolescents to limit sedentary behavior, especially the amount of time spent on recreational screen time .

5.4.3. Sleep—Preventive Behavior—Sleep in Obesity

As part of the prevention of the development of obesity, the time spent watching TV, playing computer games, and using mobile phones should be limited. In older children (i.e., >2 years of age), this time should not exceed 2 h per day and should end at least 30 min before going to bed. In infants and children up to 2 years of age, the use of multimedia devices is discouraged altogether. These behaviors can have a disruptive effect on sleep patterns, leading to a greater desire to eat at night and to snack during the day. Short sleep duration is a potential risk factor for obesity because it affects the neuroendocrine and metabolic systems. Sleep restriction in children and adolescents appears to be associated with an increased risk of weight gain, visceral obesity, and increased body fat mass, which may persist or manifest several years later. Increasing PA to at least 60 min per day promotes sleep hygiene and reduces the risk of overweight or obesity development .

5.4.4. Role Involving the School Community

The school environment, after the home environment, is the second most important center where the lives of children and young people are concentrated. A child suffering from obesity usually stands out in their peer group: they are larger and often less physically fit. In the social aspect, children and adolescents with excess body weight face nonacceptance or even rejection from their peer group. The effect of this is lowered self-esteem, which becomes a common problem in the mental sphere, leading to the development of depression, behavioral disorders, and a reduction in quality of life . A child with excess body weight can become a victim of verbal, physical, and mental aggression in the school environment, stigmatized due to obesity. The school environment, which is the place of contact between the child and the peer group, may positively or negatively influence the shaping of social relationships. A well-prepared school environment, with teaching staff educated in the field of obesity, including stigmatization, can effectively support the building of positive behaviors and attitudes toward a child suffering from obesity. A properly moderated peer group can be a support group for a child with overweight or obesity, strengthening their self-esteem and positive self-image. Acceptance by the peer group reduces the feelings of fear and guilt that often accompany children with overweight and obesity and that significantly interfere with the process of adaptation to the environment . From the perspective of the organization of the school environment, the preventive programs in which the school participates are important, focusing on healthy eating, PA, or directly on the prevention of obesity development. Active participation in this type of initiative increases the chances of a child with overweight or obesity returning to a normal body weight by shaping appropriate pro-health behaviors.
In addition, an important aspect is the organization of the school nutrition system: the principles of the school cafeteria (quality of meals served, portion sizes, and hours of serving meals), the school shop (quality of the available assortment), the presence of vending machines (and the quality of the assortment available in them), and, finally, the organization of breaks between lessons to allow children to eat a meal in peace. The way PA is organized in the school as part of physical education (PE) lessons, as well as extracurricular activities, is also important. It should be noted that the correct placement of PE classes in the timetable may be a factor that increases the active participation of children and adolescents in these classes. The way PE classes are conducted, which should be a form of fun, is also important; sports rivalry and discriminatory situations should be avoided. The role of the school nurse should also be mentioned: in the pre-school and school period, as part of primary health care, together with a pediatrician or family doctor, the school nurse provides preventive care for children. Routine health check-ups ("balance examinations") are an opportunity to assess the health of children, monitor their development, diagnose irregularities, and take corrective actions when deficits are detected. The school nurse, by being present in the school environment, can stimulate actions aimed at improving diet and PA . Nevertheless, it should be noted that the field of activities of the school nurse, the scope of duties, and the proper use of the obtained data require improvement .

5.5. The Social Environmental Factor in the Prevention of Childhood Obesity

5.5.1. The Influence of Culture on Childhood Obesity

Culture is believed to significantly affect children's body weight. First, the development of body image occurs in a cultural context and differs in shared understandings of valued and devalued body images . In some communities, thinness is considered a mark of beauty, while in others, a plump child is considered healthier . Parents' perceptions of their children's body mass vary geographically: parents from Southern Europe more often misclassified overweight children as normal weight compared with parents from Central and Northern Europe . Moreover, cultural factors have a strong influence on eating habits and behavior and, consequently, on the body weight of children and adolescents . Eating traditional foods with the family may be associated with a lower risk of obesity in some children (e.g., Asians) or a higher risk of obesity in others (e.g., African Americans) . Culture also influences the preferences and possibilities of practicing physical activity. Children model the types of physical activity undertaken by their parents; therefore, in a culture that views rest after a long working day as healthier than physical activity, a parent is less likely to have children who understand the importance of exercise for health and well-being .

5.5.2. The Influence of Policy on Childhood Obesity

The progressive phenomenon of overweight and obesity in children and adolescents requires action by governmental organizations. In Poland, the National Health Program 2016–2020 was developed, in which obesity was recognized as a disease of civilization and its treatment as one of the priorities .
The previous National Program for the Prevention of Overweight and Obesity and Chronic Non-Communicable Diseases through Improved Nutrition and Physical Activity for 2007–2011 was based on increasing public awareness of the importance of adequate nutrition and physical activity for health maintenance . In 2015, a law was introduced concerning the groups of foodstuffs that may be sold to children and adolescents in units of the educational system and the requirements to be met by foodstuffs used in the collective nutrition of children and adolescents in these units; it prohibits the sale of unhealthy food products in school canteens .

5.5.3. National Level Approach and Childhood Obesity

The main reasons for the development of childhood obesity are insufficient physical activity, improper nutrition of children at home resulting from parents' lack of knowledge, the acquisition of knowledge about children's nutrition mainly from the Internet, and the easy availability of unhealthy food for children. The World Health Organization points out that only an integrated effort can succeed in raising awareness and changing health behaviors so as to curb the increasing prevalence of obesity in children . Educational activities aimed at changing lifestyles are of particular importance in prevention programs for obesity in children and adolescents. The WHO also draws attention to the importance of proper care for pregnant women and breastfeeding, and recommends the introduction of a tax on sweetened drinks and the inclusion of obesity prevention among the tasks of the school nurse .

In recent years, several initiatives have been undertaken in Poland to tackle the problem of childhood obesity. The problem was included as a priority in the National Health Program, the National Program for the Prevention of Overweight and Obesity and Chronic Non-Communicable Diseases through improved nutrition and physical activity for 2007–2011 was developed, and in 2015 the above-mentioned act on foodstuffs sold and served to children and adolescents in educational institutions, the so-called "Shop Act", prohibiting the sale of unhealthy food products in school shops, was introduced . A 21-day program of physical activity and clinical dietetics for children with obesity aged 15–17, including assessment of the lipid profile and glutathione levels, was also introduced, along with the obligation to record a conversation about the child's nutrition in the Children's Health Book . "Bicycle May" is the largest campaign in Poland promoting an active way to school, a healthy lifestyle, and sustainable mobility among preschool children, primary school students, teachers, parents, and guardians. Through fun combined with elements of competition, it popularizes the bicycle as a means of transport to school, teaching good and healthy habits that persist even after the end of the campaign; there are prizes for the most active participants, classes, and institutions . It is one of the elements of creating a healthy environment in kindergartens and schools, and it also involves the employees of these institutions and the parents of students. As part of educational programs addressed to students, "Fruit and vegetables at school" and "Milk at school" were introduced.
Their goal was to improve the eating habits of schoolchildren by promoting and increasing the consumption of vegetables, fruit, milk, and dairy products, i.e., products important for the proper development of a child and at the same time often deficient in the daily diet. Since 2017, both programs have been combined into the "Program for Schools", which currently covers students from grades 1–5 of most Polish primary schools . In Poland, legal regulations on advertising food in programs for children were included in the Broadcasting Act of 29 December 1992, as amended in 2015, according to which "programs for children should not be accompanied by commercial communications regarding food or beverages containing ingredients, the presence of which in excessive amounts in the daily diet is not recommended" . In 2018, the educational program "5 portions of health at school" began. Its aim is to draw attention to the need for education on proper nutrition from an early age and to start such education early; the program is addressed to students of the 2nd and 3rd grades of primary schools from all over Poland and to their teachers, school principals, and school nutritionists . The "Keep Fit" program, co-implemented by the Chief Sanitary Inspectorate and the Polish Federation of Food Producers, is an initiative promoting a healthy lifestyle, combining balanced nutrition with regular physical activity . Although obesity was recognized as a disease of civilization in the National Health Program for 2016–2020 and its treatment as one of the priorities, the report of the NIK (Najwyższa Izba Kontroli, the Supreme Audit Office) shows that the actions taken by successive health ministers did not lead to a decline, or even to a slowing of the growth, in the number of children and adolescents with excess body weight. The scale of the problem was growing, and the effectiveness of therapy was modest. In the opinion of the Chamber, the reasons for such low effectiveness of treatment were diagnostic errors and outdated methods of therapy, but above all the lack of access to treatment, mainly due to the lack of specialists .

5.6. The Role of Primary Care in the Prevention of Obesity Development and Its Treatment in Children

Epidemiological data show that a family doctor encounters obesity and its complications daily in their practice. The staff of primary care facilities are the first point of contact in the healthcare system for patients suffering from obesity. Treatment of overweight and obesity prevents the development of complications and sometimes also makes it possible to treat complications that have already developed. Consequently, the main task of primary healthcare workers—family physicians, pediatricians, and nurses—is the diagnosis and treatment of overweight and obesity. As shown in previously published studies, parents often do not perceive a child's overweight or obesity in terms of a health problem and thus do not take any intervention measures . Professional medical support provided to patients with overweight or obesity benefits the patient, their family, and the whole of society, reducing the direct and indirect costs of obesity. Primary health care workers have the greatest opportunity to observe changes in body weight in their patients and to identify the environmental determinants and psychological factors that contribute to the emergence and perpetuation of abnormal behavior. At the local level, they are responsible for promoting the health of those under their care.
Patients who choose a primary care physician remain under their care for many years. This allows the relationship to be consolidated and trust to be built, and thus enables continuous monitoring of patients and motivating them toward pro-health behavior. The long-term relationship with the patient, knowledge of their medical history, and frequent contact are of particular importance in the case of diseases that develop over many years and chronic diseases such as obesity. Monitoring development, including changes in a child's body weight, makes it possible to capture growing excess body weight at an early stage and to initiate therapeutic measures as soon as possible. However, the diagnosis and treatment of obesity remain among the main problems not only in primary but also in specialist health care in Poland. More actions are needed to strengthen the role of primary care in the effective prevention and treatment of obesity.
In conclusion, the role of the parent and the family is to create an environment that models appropriate health behavior, flexible, and ready to change in the event of a threat to the health of the child, but also supports the child in pro-health behaviors. These family tasks should be carried out in the home environment but also outside it, involving other key people in the process (e.g., grandparents, neighbors, friends, helping with caring for the child). Lifestyle modification in child is most effective when changes affect all family members. Behavioral correction may also bring health benefits to households with healthy body weight, while not causing a feeling of exclusion or stigmatization in a child with excessive body weight . The prevention of childhood obesity should be started at the pre-pregnancy period because both preconception and perinatal maternal health, and especially BMI, consistently predict excessive weight in the offspring . The modifiable risk factors for childhood overweight and obesity development related to pregnancy are: high maternal preconception BMI, excessive weight gain during pregnancy, maternal gestational diabetes mellitus, hypertension, and smoking during pregnancy . These factors are related to newborn’s low birth weight, macrosomia, and also to small-for-gestational age (SGA) and large-for-gestational age (LGA), which are related with increased risk of high fat mass and metabolic disturbances in later life . It was shown that women who have excessive weight were twice as likely to have an overweight or obese child compared to women with normal weight . In addition, the disturbed intrauterine environment caused by an elevated glucose level in the mother’s blood is related to an increased risk of increased birth weight, obesity, and metabolic disorders later in life . Other prenatal conditions, hypertension, and smoking during pregnancy, are associated with the risk of low birth weight and SGA. Prevention actions should focus on modifiable pregnancy-related risk factors for childhood overweight and obesity. Healthy lifestyle, PA, and balanced diet leading to maintain the normal body weight before conception, as well as proper weight gain during pregnancy, should reduce overweight and obesity risks in a child . In women at risk of gestational diabetes mellitus, the prevention, early diagnosis, and proper treatment of glucose metabolism disturbances are essential for child’s health . In addition to monitoring the glucose level and possible insulin treatment, a diet with decreased carbohydrates and PA are crucial. In pregnant women with low weight and undernutrition, the risk of having infants with SGA or low birth weight is high. For mothers, an energy-balanced, protein supplemented diet could be considered . Proper nutrition in the first period of life is primarily to meet the demand for energy and necessary nutrients, ensuring proper physical and psychomotor development. This will help prevent overweight development. It is also recommended to avoid excessive weight gain and/or an increased weight-to-length ratio from the first months of life. Children with obesity are more likely to become adults with obesity . The goal to be pursued is exclusive breastfeeding for the first 6 months of life. Partial or shorter breastfeeding is also beneficial. Breastfeeding should continue for as long as desired by the mother and baby . 
Human food produced in sufficient quantity fully satisfies the infant’s need for all necessary nutrients, while ensuring its proper development in the first half of its life. Healthy infants 1–6 months of pure breast feeding consume approximately 75 ± 12.6 g of milk from one breast and 101 ± 15.6 g from both breasts. The average number of feedings decreases with the age of the baby and is as follows: ▪ in the first six months of life, 8–12/24 h ▪ in the second half of the first year of life, 6–8/24 h ▪ in the 2nd year of life, 3–6/24 h . It should be aimed to ensure that a child after 1 year of age was no longer fed at night. Infants not fed naturally should receive breast milk substitutes. Based on the consensus of experts, a recommendation was formulated that after reaching the 12th month of life, breastfeeding should continue for as long as desired by the mother and baby. During this time, it is recommended to provide complementary foods. The introduction of complementary products should start when the infant shows the developmental skills needed to consume them, usually not earlier than 17 weeks of age (beginning of the fifth month of life) and not later than 26 weeks of age (beginning of the seventh month of life) . In the nutrition of toddlers, there are significant changes in the eating patterns related to the transition from a typical milk (liquid) diet to a more varied diet (infant diet → transitional diet → family, table diet). During this period, behavior and food preferences also form. The demand for energy and most nutrients in toddlers is reduced per 1 kg of body weight compared to infancy, and for some components it remains relatively constant . Eating a variety of vegetables and fruits, whole grains, a variety of lean protein foods, and low-fat and fat-free dairy products is essential for maintaining a normal body weight and health . It is also recommended to limit foods and beverages with added sugars, solid fats, or sodium, as well as alcoholic and energy drinks. Rational nutrition should optimally include five meals a day. The appropriate proportions between meals and the regular hours of their consumption should be promoted . 5.4.1. Physical Activity The first years of life are essential for starting obesity prevention focused on promoting and maintaining an appropriate level of PA. Prevention strategies should include families, schools, social networks, media, and the general community, which should promote a healthy lifestyle by giving an example to follow or providing a supportive environment . For many children, maintaining an appropriate level of PA may be sufficient to prevent obesity. Children who are physically active have lower body fat content than their physically inactive peers . The 2020 PHYSICAL activity guidelines call for children and adolescents aged from 5 to 17 to accumulate at least an average of 60 minutes of moderate- to vigorous PA (MVPA) per day, mostly aerobic. They also recommend that vigorous physical activities and exercise to strengthen muscles and bones be undertaken at least 3 days a week. Infants (<1 year) should be encouraged to be physically active several times a day by supervised, interactive floor-based play. Toddlers (1–2 years) and preschoolers (3–4 years) should accumulate at least 180 min of PA at any intensity, including MVPA, spread throughout the day. 
A higher level of PA than the recommended minimum is associated with additional health benefits, such as increased physical fitness (cardiorespiratory and muscular fitness), decreasing of body fat, improvement of cardiometabolic health (BP, dyslipidaemia, glucose, and insulin resistance), improvement of bone health, cognitive outcomes, and mental health . 5.4.2. Sedentary Behaviors There is evidence to suggest that in the pediatric population, greater time spent in sedentary behavior (especially screen time, including TV viewing) is associated with excessive body weight and poorer health outcomes, such as decreased physical fitness and cardiometabolic health . This can be explained by the fact that the screen time competes with PA time, and therefore displaces energy expenditure . Moreover, screen time is often associated with increased consumption of food, exposure to high-calorie, nutrient-poor food, and shorter sleep duration . Reallocation of sedentary time to MVPA is related with a reduction of adiposity among youth . Evidence suggested that screen time over 2 h per day was related to a higher risk of overweight/obesity in children Therefore, it is recommended to limit the time spent in sedentary behavior to 2 h per day by breaking up long periods of sitting as often as possible. Less time spent in sedentary behaviors seems to have better health outcomes . The 2020 WHO guidelines call for children and adolescents to limit sedentary behavior, especially the amount of time spent on recreational screen time . 5.4.3. Sleep—Preventive Behavior—Sleep in Obesity As part of the prevention of the development of obesity, the time spent watching TV, playing computer games, and using mobile phones should be limited. The time spent in older children (i.e., >2 years of age) is up to a maximum of 2 h per day completed at least 30 min before going to bed. In infants and children up to 2 years of age, complete use of multimedia devices is discouraged. These behaviors can have a disruptive effect on sleep patterns, leading to a greater desire to eat at night and snack during the day. Short duration of sleep is a potential risk factor for obesity because it affects the neuroendocrine and metabolic systems. Sleep restriction in children and adolescents appears to be associated with an increased risk of weight gain, visceral obesity, and increased body fat mass, which may persist or manifest several years later. Increasing PA to at least 60 min per day promotes sleep hygiene and a reduced risk of overweight or obesity development . 5.4.4. Role Involving the School Community The school environment, after the home environment, is the second most important center where the lives of children and young people are concentrated. A child suffering from obesity usually stands out in their peer group: they are larger, often less physically fit. In the social aspect: children and adolescents with excess body weight face nonacceptance or even rejection from their peer group. The effect of this is lowered self-esteem, which becomes a common problem in the mental sphere, leading to the development of depression, behavioral disorders, and a reduction in quality of life . A child with excess body weight in the school environment can become a victim of verbal, physical, and mental aggression, stigmatized due to obesity. The school environment, which is the place of contact between the child and the peer group, may positively or negatively influence the shaping of social relationships. 
A well-prepared school environment, educated teaching staff in the field of obesity, including stigmatization, can effectively support the building of positive behaviors and attitudes toward a child suffering from obesity. A properly moderated peer group can be a support group for a child with overweight and obesity, strengthening his self-esteem and positive self-image. Acceptance of the peer group reduces the feeling of fear and guilt often accompanying children with overweight and obesity, which significantly interferes with the process of adaptation to the environment . From the perspective of the organization of the school environment, the preventive programs in which the school participates are important, focusing its activities on the area of healthy eating, PA, or directly prevention of obesity development. Active participation in this type of initiative increases the chances of a child with overweight and obesity to return to a normal body weight by shaping appropriate prohealth behaviors. In addition, an important aspect is also the organization of the school nutrition system: the principles of the school cafeteria (quality of meals served, portion sizes, and hours of serving meals), the school shop (quality of the available assortment), the presence of vending machines (the quality of the assortment available in them), and finally the organization of breaks between lessons to allow children to eat a meal in peace. The way of organizing PA in the school as part of physical education (PE) lessons is also important, as well as extracurricular activities. It should be noted here that the correct planning of PE classes in the hour grid may be a factor that increases the active participation of children and adolescents in these classes. The method of conducting PE classes, which should be a form of fun, is also important. Sports rivalry and discriminatory situations should be avoided. It should also be mentioned the role of the school nurse, who, in the pre-school and school period, as part of primary health care, together with a pediatrician or family doctor, provides preventive care for children. Balance examinations are an opportunity to assess the health of children, monitor their development, diagnose irregularities, and take corrective actions to detect deficits. The school nurse, by being present in the school environment, can stimulate actions aimed at improving diet and PA . Nevertheless, it should be noted that the field of activities of the school nurse, the scope of duties, and the proper use of the obtained data require improvement . The first years of life are essential for starting obesity prevention focused on promoting and maintaining an appropriate level of PA. Prevention strategies should include families, schools, social networks, media, and the general community, which should promote a healthy lifestyle by giving an example to follow or providing a supportive environment . For many children, maintaining an appropriate level of PA may be sufficient to prevent obesity. Children who are physically active have lower body fat content than their physically inactive peers . The 2020 PHYSICAL activity guidelines call for children and adolescents aged from 5 to 17 to accumulate at least an average of 60 minutes of moderate- to vigorous PA (MVPA) per day, mostly aerobic. They also recommend that vigorous physical activities and exercise to strengthen muscles and bones be undertaken at least 3 days a week. 
Infants (<1 year) should be encouraged to be physically active several times a day by supervised, interactive floor-based play. Toddlers (1–2 years) and preschoolers (3–4 years) should accumulate at least 180 min of PA at any intensity, including MVPA, spread throughout the day. A higher level of PA than the recommended minimum is associated with additional health benefits, such as increased physical fitness (cardiorespiratory and muscular fitness), decreasing of body fat, improvement of cardiometabolic health (BP, dyslipidaemia, glucose, and insulin resistance), improvement of bone health, cognitive outcomes, and mental health . There is evidence to suggest that in the pediatric population, greater time spent in sedentary behavior (especially screen time, including TV viewing) is associated with excessive body weight and poorer health outcomes, such as decreased physical fitness and cardiometabolic health . This can be explained by the fact that the screen time competes with PA time, and therefore displaces energy expenditure . Moreover, screen time is often associated with increased consumption of food, exposure to high-calorie, nutrient-poor food, and shorter sleep duration . Reallocation of sedentary time to MVPA is related with a reduction of adiposity among youth . Evidence suggested that screen time over 2 h per day was related to a higher risk of overweight/obesity in children Therefore, it is recommended to limit the time spent in sedentary behavior to 2 h per day by breaking up long periods of sitting as often as possible. Less time spent in sedentary behaviors seems to have better health outcomes . The 2020 WHO guidelines call for children and adolescents to limit sedentary behavior, especially the amount of time spent on recreational screen time . As part of the prevention of the development of obesity, the time spent watching TV, playing computer games, and using mobile phones should be limited. The time spent in older children (i.e., >2 years of age) is up to a maximum of 2 h per day completed at least 30 min before going to bed. In infants and children up to 2 years of age, complete use of multimedia devices is discouraged. These behaviors can have a disruptive effect on sleep patterns, leading to a greater desire to eat at night and snack during the day. Short duration of sleep is a potential risk factor for obesity because it affects the neuroendocrine and metabolic systems. Sleep restriction in children and adolescents appears to be associated with an increased risk of weight gain, visceral obesity, and increased body fat mass, which may persist or manifest several years later. Increasing PA to at least 60 min per day promotes sleep hygiene and a reduced risk of overweight or obesity development . The school environment, after the home environment, is the second most important center where the lives of children and young people are concentrated. A child suffering from obesity usually stands out in their peer group: they are larger, often less physically fit. In the social aspect: children and adolescents with excess body weight face nonacceptance or even rejection from their peer group. The effect of this is lowered self-esteem, which becomes a common problem in the mental sphere, leading to the development of depression, behavioral disorders, and a reduction in quality of life . A child with excess body weight in the school environment can become a victim of verbal, physical, and mental aggression, stigmatized due to obesity. 
The school environment, which is the place of contact between the child and the peer group, may positively or negatively influence the shaping of social relationships. A well-prepared school environment, educated teaching staff in the field of obesity, including stigmatization, can effectively support the building of positive behaviors and attitudes toward a child suffering from obesity. A properly moderated peer group can be a support group for a child with overweight and obesity, strengthening his self-esteem and positive self-image. Acceptance of the peer group reduces the feeling of fear and guilt often accompanying children with overweight and obesity, which significantly interferes with the process of adaptation to the environment . From the perspective of the organization of the school environment, the preventive programs in which the school participates are important, focusing its activities on the area of healthy eating, PA, or directly prevention of obesity development. Active participation in this type of initiative increases the chances of a child with overweight and obesity to return to a normal body weight by shaping appropriate prohealth behaviors. In addition, an important aspect is also the organization of the school nutrition system: the principles of the school cafeteria (quality of meals served, portion sizes, and hours of serving meals), the school shop (quality of the available assortment), the presence of vending machines (the quality of the assortment available in them), and finally the organization of breaks between lessons to allow children to eat a meal in peace. The way of organizing PA in the school as part of physical education (PE) lessons is also important, as well as extracurricular activities. It should be noted here that the correct planning of PE classes in the hour grid may be a factor that increases the active participation of children and adolescents in these classes. The method of conducting PE classes, which should be a form of fun, is also important. Sports rivalry and discriminatory situations should be avoided. It should also be mentioned the role of the school nurse, who, in the pre-school and school period, as part of primary health care, together with a pediatrician or family doctor, provides preventive care for children. Balance examinations are an opportunity to assess the health of children, monitor their development, diagnose irregularities, and take corrective actions to detect deficits. The school nurse, by being present in the school environment, can stimulate actions aimed at improving diet and PA . Nevertheless, it should be noted that the field of activities of the school nurse, the scope of duties, and the proper use of the obtained data require improvement . 5.5.1. The Influence of Culture on Childhood Obesity Culture is believed to significantly affect children’s body weight. First, the development of body image occurs in a cultural context and differs in shared understandings as to valued and disvalued body image . In some communities, thinness is considered beauty, while in others, a plump child is considered healthier . Parents’ perceptions of their children’s body mass varied geographically. Parents from Southern Europe more often misclassified overweight children as normal weight compared with parents from Central and Northern Europe . Moreover, cultural factors have a strong influence on eating habits and behavior and, consequently, the body weight of children and adolescents . 
Eating traditional foods with the family may be associated with lowering the risk of obesity in some children (e.g., Asians) or increasing the risk of obesity in other children (e.g., African Americans) . Culture also influences the preferences and possibilities of practicing physical activity. Children model the types of physical activity undertaken by parents. Therefore, in a culture that views rest after a long working day as healthier than physical activity, a parent is less likely to have children who understand the importance of exercise for health and well-being . 5.5.2. The Influence of Policy on Childhood Obesity The progressive phenomenon of overweight and obesity in children and adolescents requires action by governmental organizations. In Poland, the National Health Program 2016–2020 was developed where obesity was recognized as a disease of civilization and its treatment as one of the priorities . The previous National Program for the Prevention of Overweight and Obesity and Chronic Non-Communicable Diseases through Improved Nutrition and Physical Activity for 2007–2011 was based on increasing public awareness of the importance of adequate nutrition and physical activity in relation to health maintenance . In 2015, a law was introduced concerning groups of foodstuffs for sale to children and adolescents in units of the educational system and the requirements to be met by foodstuffs used in the collective nutrition of children and adolescents in these units. It prohibits the sale of unhealthy food products in school canteens . 5.5.3. National Level Approach and Childhood Obesity The main reasons for the development of childhood obesity are insufficient physical activity, improper nutrition of children at home, resulting from the lack of knowledge of parents, acquiring knowledge about children’s nutrition mainly from the Internet, and easy availability of unhealthy food for children. The World Health Organization points out that only an integrated effort can help to be successful in raising awareness and changing health behaviors in order to prevent the trend of an increase in the prevalence of obesity in children . Educational activities aimed at changing lifestyles are of particular importance in the prevention programs for obesity in children and adolescents. She draws attention to the importance of proper care for a pregnant woman, breastfeeding, and recommends the introduction of taxation of sweetened drinks and the inclusion of obesity prevention in the tasks of the school nurse . In recent years, several initiatives have been taken in Poland to tackle the problem of child obesity. This problem was included as a priority and included in the National Health Program. The National Program for the Prevention of Overweight and Obesity and Chronic Non-Communicable Diseases by improving nutrition and physical activity for 2007–2011 was developed. It was based on increasing public awareness of the importance of adequate nutrition and exercise in relation to health. In 2015, an act was introduced on groups of foodstuffs intended for sale to children and adolescents in education system units and the requirements to be met by foodstuffs used as part of mass nutrition of children and adolescents in these units, the so-called “Shop act”. It prohibits the sale of unhealthy food products in school shops . 
A 21-day program of physical activity and clinical dietetics for obese children aged 15–17 was also introduced to examine the lipid profile and glutathione levels, and the obligation to record a conversation about children’s nutrition in the Children’s Health Book . “Bicycle May” is the largest campaign in Poland promoting an active way to school, a healthy lifestyle and sustainable mobility among preschool children, primary school students, teachers, parents, and guardians. Bicycle May, through fun combined with elements of competition, popularizes the bicycle as a means of transport to school, teaching good and healthy habits that persist even after the end of the campaign. There are prizes for the most active participants, classes, and institutions . This is one of the elements of creating a healthy environment in kindergartens and schools, but is also for the employees of institutions and parents of students. As part of educational programs addressed to students, “Fruit and vegetables at school” and “Milk at school” were introduced. Their goal was to improve the eating habits of schoolchildren by promoting and increasing the consumption of vegetables, fruit, milk, and dairy products, i.e., products important for the proper development of a child, and at the same time often deficient in the daily diet. Since 2017, both programs have been combined into the “Program for Schools”, which currently covers students from grades 1–5 of most Polish primary schools . In Poland, legal regulations on advertising food in programs for children were included in the Broadcasting Act of 29 December 1992, as amended (2015), according to which "programs for children should not be accompanied by commercial communications regarding food or beverages containing ingredients, the presence of which in excessive amounts in the daily diet is not recommended” . In 2018, the educational program “5 portions of health at school” began. Its aim is to draw attention to the need for education in the field of proper nutrition from an early age and at the same time to start it early. The program is addressed to students of 2nd and 3rd grades of primary schools from all over Poland and their teachers, school principals and school nutritionists. 2016–2020 . The “Keep Fit” program, co-implemented by the Chief Sanitary Inspectorate and the Polish Federation of Food Producers, is an initiative promoting a healthy lifestyle, combining balanced nutrition with regular physical activity . Although obesity was recognized as a civilization disease in the National Health Program for 2016–2020, and its treatment as one of the priorities, the report of the NIK (Najwyższa Izba Kontroli Supreme Audit) shows that the actions taken by successive health ministers not only did not lead to a decline, but even to inhibiting the growth rate of the number of children and adolescents with excess body weight. The scale of the problem was growing, and the effectiveness of the therapy was modest. In the opinion of the Chamber, the reasons for such low effectiveness of treatment were diagnostic errors and outdated methods of therapy, but most of all the lack of access to treatment, mainly due to the lack of specialists . Culture is believed to significantly affect children’s body weight. First, the development of body image occurs in a cultural context and differs in shared understandings as to valued and disvalued body image . In some communities, thinness is considered beauty, while in others, a plump child is considered healthier . 
Parents’ perceptions of their children’s body mass varied geographically. Parents from Southern Europe more often misclassified overweight children as normal weight compared with parents from Central and Northern Europe . Moreover, cultural factors have a strong influence on eating habits and behavior and, consequently, the body weight of children and adolescents . Eating traditional foods with the family may be associated with lowering the risk of obesity in some children (e.g., Asians) or increasing the risk of obesity in other children (e.g., African Americans) . Culture also influences the preferences and possibilities of practicing physical activity. Children model the types of physical activity undertaken by parents. Therefore, in a culture that views rest after a long working day as healthier than physical activity, a parent is less likely to have children who understand the importance of exercise for health and well-being . The progressive phenomenon of overweight and obesity in children and adolescents requires action by governmental organizations. In Poland, the National Health Program 2016–2020 was developed where obesity was recognized as a disease of civilization and its treatment as one of the priorities . The previous National Program for the Prevention of Overweight and Obesity and Chronic Non-Communicable Diseases through Improved Nutrition and Physical Activity for 2007–2011 was based on increasing public awareness of the importance of adequate nutrition and physical activity in relation to health maintenance . In 2015, a law was introduced concerning groups of foodstuffs for sale to children and adolescents in units of the educational system and the requirements to be met by foodstuffs used in the collective nutrition of children and adolescents in these units. It prohibits the sale of unhealthy food products in school canteens . The main reasons for the development of childhood obesity are insufficient physical activity, improper nutrition of children at home, resulting from the lack of knowledge of parents, acquiring knowledge about children’s nutrition mainly from the Internet, and easy availability of unhealthy food for children. The World Health Organization points out that only an integrated effort can help to be successful in raising awareness and changing health behaviors in order to prevent the trend of an increase in the prevalence of obesity in children . Educational activities aimed at changing lifestyles are of particular importance in the prevention programs for obesity in children and adolescents. She draws attention to the importance of proper care for a pregnant woman, breastfeeding, and recommends the introduction of taxation of sweetened drinks and the inclusion of obesity prevention in the tasks of the school nurse . In recent years, several initiatives have been taken in Poland to tackle the problem of child obesity. This problem was included as a priority and included in the National Health Program. The National Program for the Prevention of Overweight and Obesity and Chronic Non-Communicable Diseases by improving nutrition and physical activity for 2007–2011 was developed. It was based on increasing public awareness of the importance of adequate nutrition and exercise in relation to health. 
Epidemiological data show that a family doctor encounters obesity and its complications daily in their practice. Primary care staff are typically the first medical professionals to come into contact with patients suffering from obesity. Treatment of overweight and obesity prevents the development of complications; sometimes it can also reverse complications that have already developed. Consequently, the main task of primary healthcare workers (family physicians, pediatricians, and nurses) is the diagnosis and treatment of overweight and obesity. As shown in previously published studies, parents often do not perceive a child’s overweight or obesity as a health problem and thus do not take any intervention measures . Professional medical support provided to overweight and obese patients benefits the patient and their family, as well as the whole society, by reducing the direct and indirect costs of obesity. Primary health care workers have the greatest opportunity to observe changes in body weight in their patients and to identify environmental determinants and psychological factors that contribute to the emergence and perpetuation of abnormal behavior. At the local level, they are responsible for promoting the health of those under their care. Patients who choose a primary care physician remain under their care for many years. This allows the relationship to be consolidated and trust to be built, and thus enables the physician to continuously monitor patients and motivate them toward pro-health behavior. The long-term relationship with the patient, the knowledge of their medical history, and frequent contact are of particular importance in the case of chronic diseases that develop over many years, such as obesity. Monitoring development, including changes in a child’s body weight, makes it possible to detect growing excess body weight at an early stage and to initiate therapeutic measures as soon as possible. However, the diagnosis and treatment of obesity remains one of the main problems not only in primary, but also in specialist health care in Poland. More actions are needed to strengthen the role of primary care in the effective prevention and treatment of obesity.
Recommendations for general practitioners (GPs)
Primary care physicians who provide preventive care to children should:
- assess the child’s nutritional status as part of each contact, especially during vaccinations and periodic check-ups, and, if BMI indicates excess body weight and in particular obesity, which is a chronic disease, make such a diagnosis and undertake appropriate treatment;
- educate the patient’s parents and the child about a healthy lifestyle, based on their own observations and an interview on the child’s diet and level of physical activity (PA);
- educate parents about the dangers of obesity and its complications;
- cooperate with representatives of other medical professions (dieticians, physiotherapists, psychologists) to improve the effectiveness of caring for a child with obesity;
- inform parents of children with overweight and obesity about available support methods and about places in the region where such support can be obtained (specialist clinics, available preventive programs and health policy programs);
- cooperate with local government authorities to build an effective regional system of support for patients with excessive body weight.
Recommendations for parents
Parents of a child who is overweight or obese play a special role in the prevention of obesity, and their correct attitude directly translates into the effectiveness of the therapeutic process. Top recommendations for parents include:
- building appropriate health behaviors from the earliest stage of a child’s development by modeling the child’s behavior through the parents’ own appropriate behavior;
- exclusive breastfeeding of infants up to 6 months of age and expanding the menu of young children according to the applicable recommendations;
- organizing the diet based on the principles of healthy eating;
- enabling the child to carry out at least 60 min of PA daily, in line with WHO guidelines;
- if necessary, modifying the lifestyle of the whole family (diet and exercise) to stop further accumulation of overweight and obesity and to achieve a reduction of the excess;
- ensuring that sleep time is appropriate for the age of the child;
- active participation in preventive health care;
- creating an environment that supports the child’s pro-health behavior.
Recommendations for teachers
The teacher, who is the guardian of the child in the school environment, is obliged to ensure their safety and support their physical, mental, and social development. Top recommendations for teachers include:
- supporting overweight and obese children in the peer group by building the child’s self-esteem, in particular that of a child with overweight or obesity, and by showing interest in and expressing recognition and appreciation of the child.
Such activities may have a protective effect: on the one hand, they prevent the development of lowered self-esteem in the child and, on the other hand, they prevent stigmatization in the peer group. Teachers are further advised to:
- influence the social position of a child with overweight or obesity in the peer group, e.g., by showing the group the strengths of individual students and by counteracting exclusion;
- undertake activities that support the return to normal body weight, e.g., by establishing group rules on bringing sweet products to school;
- motivate the child and support the reduction of excess body weight;
- shape, by their own example, the positive behavior of their pupils, for example through the right food choices, not rewarding children with sweets, and a reasonable choice of places where children eat meals during school trips;
- enable students to take an active part in health policy programs implemented at school;
- provide students with access to clean water and allow them to drink water during class;
- encourage PA, including interclass activities and active participation in physical education classes.
Recommendations for regional authorities
To be effective, prevention of obesity must be undertaken at various organizational levels. In addition to activities at the national level, activities undertaken at the regional level are essential. The tasks of the regional authorities in the field of obesity prevention include:
- building public–private partnerships and engaging all entities in cooperation, in particular nongovernmental organizations promoting a healthy lifestyle, consumer organizations, and private sector entities, including the food industry and the media, in joint activities to promote healthy behavior;
- using the mechanisms and instruments of impact available at this level (including legal regulations) that can make environmental conditions more conducive to pro-health behavior;
- implementing health policy programs dedicated to the widest possible group of recipients, aimed in particular at building proper eating behavior and increasing PA;
- providing additional measures for children and adolescents at higher risk of developing obesity and its complications;
- creating social campaigns aimed at improving diet and increasing PA;
- assisting in the organization of psychological support;
- organizing events that promote pro-health behaviors (picnics, sports events involving whole families, educational workshops, culinary shows);
- investing in infrastructure that supports pro-health behavior and creates conditions for active recreation (e.g., playgrounds with equipment that enables the youngest children to be physically active, catering studios in educational institutions, community centers, bicycle paths, walking paths);
- cooperating with the scientific and medical community in the region to diagnose health needs as accurately as possible and to provide services corresponding to the diagnosed needs .
HippoUnit: A software tool for the automated testing and systematic comparison of detailed models of hippocampal neurons based on electrophysiological data
A prime example of this is detailed single cell models included in network models, where diverse aspects of cellular function such as synaptic integration, intracellular signal propagation, spike generation and adaptation mechanisms all contribute to the input-output function of the neuron in the context of an active network. By comparing multiple different aspects of the behavior of the single cell model with experimental data, one can increase the chance of having a model that also behaves correctly within the network. The technical framework for developing automated test suites for models already exists , and is currently used by several groups to create a variety of tests for models of neural structure and function at different scales . In the current study, our goal was to develop a validation suite for the physiological behavior of one of the most studied cell types of the mammalian brain, the pyramidal cell in area CA1 of the rat hippocampus. CA1 pyramidal neurons display a large repertoire of nonlinear responses in all of their compartments (including the soma, axon, and various functionally distinct parts of the dendritic tree), which are experimentally well-characterized. In particular, there are detailed quantitative results available on the subthreshold and spiking voltage response to somatic current injections ; on the properties of the action potentials back-propagating from the soma into the dendrites , which is a basic measure of dendritic excitability; and on the characteristics of the spread and non-linear integration of synaptically evoked signals in the dendrites, including the conditions necessary for the generation of dendritic spikes . The test suite that we have developed allows the quantitative comparison of the behavior of anatomically and biophysically detailed models of rat CA1 pyramidal neurons with experimental data in all of these domains. In this paper, we first describe the implementation of the HippoUnit validation suite. Next, we show how we used this test suite to systematically compare existing models from six prominent publications from different laboratories. We then show an example of how the tests have been applied to aid the development of new models in the context of the European Human Brain Project (HBP). Finally, we describe the integration of our test suite into the general validation framework developed in the HBP.
Implementation of HippoUnit HippoUnit is a Python test suite based on the SciUnit framework, which is a Python package for testing scientific models, and during its implementation the NeuronUnit package was taken into account as an example of how to use the SciUnit framework for testing neuronal models. In SciUnit tests usually four main classes are implemented: the test class, the model class, the capabilities class and the score class. HippoUnit is built in a way that keeps this structure. The key idea behind this structure is the decoupling of the model implementation from the test implementation by defining standardized interfaces (capabilities) between them, so that tests can easily be used with different models without being rewritten, and models can easily be adapted to fit the framework. Each test of HippoUnit is a separate Python class that, similarly to other SciUnit packages, can run simulations on the models to generate model predictions , which can be compared with experimental observations to yield the final score, provided that the model has the required capabilities implemented to mimic the appropriate experimental protocol and produce the same type of measurable output. All measured or calculated data that contribute to the final score (including the recorded voltage traces, the extracted features and the calculated feature scores) are saved in JSON or pickle files (or, in many cases, in both types of files). JSON files are human readable, and can be easily loaded into Python dictionaries. Data with a more complex structure are saved into pickle files. This makes it possible to easily write and read the data (for further processing or analysis) without changing its Python structure, no matter what type of object or variable it is. In addition to the JSON files a text file (log file) is also saved, that contains the final score and some useful information or notes specific to the given test and model. Furthermore, the recorded voltage traces, the extracted features and the calculated feature scores are also plotted for visualization. Similarly to many of the existing SciUnit packages the implementations of specific models are not part of the HippoUnit package itself. Instead, HippoUnit contains a general ModelLoader class. This class is implemented in a way that it is able to load and deal with most types of models defined in the HOC language of the NEURON simulator (either as standalone HOC models or as HOC templates) . It implements all model-related methods (capabilities) that are needed to simulate these kinds of neural models in order to generate the prediction without any further coding required from the user. For the smooth validation of the models developed using parameter optimization within the HBP there is a child class of the ModelLoader available in HippoUnit that is called ModelLoader_BPO . This class inherits most of the functions (especially the capability functions) from the ModelLoader class, but it implements additional functions that are able to automatically deal with the specific way in which information is represented and stored in these optimized models. The role of these functions is to gather all the information from the metadata and configuration files of the models that are needed to set the parameters required to load the models and run the simulations on them (such as path to the model files, name of the model template or the simulation temperature (the celsius variable of Neuron)). 
This enables the validation of these models without any manual intervention needed from the user. The section lists required by the tests of HippoUnit are also created automatically using the morphology files of these models (for details see the “Classify apical sections of pyramidal cells” subsection). For neural models developed using other software and methods, the user needs to implement the capabilities through which the tests of HippoUnit perform the simulations and recordings on the model. The capabilities are the interface between the tests and the models. The ModelLoader class inherits from the capabilities and must implement the methods of the capability. The test can only be run on a model if the necessary capability methods are implemented in the ModelLoader . All communication between the test and the model happens through the capabilities. The methods of the score classes perform the quantitative comparison between the prediction and the observation , and return the score object containing the final score and some related data, such as the paths to the saved figure and data (JSON) files and the prediction and observation data. Although SciUnit and NeuronUnit have a number of different score types implemented, those typically compare a single prediction value to a single observation value, while the tests of HippoUnit typically extract several features from the model’s response to be compared with experimental data. Therefore, each test of HippoUnit has its own score class implemented that is designed to deal with the specific structure of the output prediction data and the corresponding observation data. For simplicity, we refer to the discrepancy between the target experimental data ( observation ) and the models’ behavior ( prediction ) with respect to a studied feature using the term feature score. In most cases, when the basic statistics (mean and standard deviation) of the experimental features (typically measured in several different cells of the same cell type) are available, feature scores are computed as the absolute difference between the feature value of the model and the experimental mean feature value, divided by the experimental standard deviation (Z-score) . The final score of a given test achieved by a given model is given by the average (or, in some cases, the sum) of the feature scores for all the features evaluated by the test. Implementation of the tests of HippoUnit The Somatic Features Test The Somatic Features Test uses the Electrophys Feature Extraction Library (eFEL) to extract and evaluate the values of both subthreshold and suprathreshold (spiking) features from voltage traces that represent the response of the model to somatic current injections of different positive (depolarizing) and negative (hyperpolarizing) current amplitudes. Spiking features describe action potential shape (such as AP width, AP rise/fall rate, AP amplitude, etc.) and timing (frequency, inter-spike intervals, time to first/last spike, etc.), while some passive features (such as the voltage base or the steady state voltage), and subthreshold features for negative current stimuli (voltage deflection, sag amplitude, etc.) are also examined. In this test step currents of varying amplitudes are injected into the soma of the model and the voltage response is recorded. The simulation protocol is set according to an input configuration JSON file, which contains all the current amplitudes, the delay and the duration of the stimuli, and the stimulation and recording positions. 
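To make this concrete, the sketch below shows the kind of information such a configuration could contain, written here as a Python dictionary. The field names and values are illustrative assumptions for demonstration purposes and do not necessarily match the exact schema of HippoUnit's JSON files.

```python
# Illustrative stimulus configuration for the Somatic Features Test
# (assumed field names; values are placeholders).
stimulus_config = {
    "-0.4": {"Amplitude": -0.4, "Delay": 500, "Duration": 1000,   # nA, ms, ms
             "StimSectionName": "soma", "StimLocationX": 0.5,
             "RecSectionName": "soma", "RecLocationX": 0.5},
    "0.6":  {"Amplitude": 0.6, "Delay": 500, "Duration": 1000,
             "StimSectionName": "soma", "StimLocationX": 0.5,
             "RecSectionName": "soma", "RecLocationX": 0.5},
}
```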
Simulations using different current amplitudes are run in parallel if this is supported by the computing environment. As the voltage responses of neurons to somatic current injections can strongly depend on the experimental method, and especially on the type of electrode used, target values for these features were extracted from two different datasets. One dataset was obtained from sharp electrode recordings from adult rat CA1 neurons (this will be called the sharp electrode dataset) , and the other dataset is from patch clamp recordings in rat CA1 pyramidal cells (data provided by Judit Makara, which will be referred to as the patch clamp dataset). For both of these datasets we had access to the recorded voltage traces from multiple neurons, which made it possible to perform our own feature extraction using eFEL. This ensures that the features are interpreted and calculated the same way for both the experimental data and the models’ voltage response during the simulation. Furthermore, it allows a more thorough comparison against a large number of features extracted from experimental recordings yielded using the exact same protocol, which is unlikely to be found in any paper of the available literature. However, to see how representative these datasets are of the literature as a whole we first compared some of the features extracted from these datasets to data available on Neuroelectro.org and on Hippocampome.org . The features we compared were the following: resting potential, voltage threshold, after-hyperpolarization (AHP) amplitudes (fast, slow), action potential width and sag ratio. Although these databases have mean and standard deviation values for these features that are calculated from measurements using different methods, protocols and from different animals, we found that most of the feature values for our two experimental datasets fall into the ranges declared as typical for CA1 PCs in the online databases. The only conspicuous exception is the fast AHP amplitude of the patch clamp dataset used in this study, which is 1.7 ± 1.5 mV, while the databases cite values between 6.8 and 11.64 mV. This deviation could possibly stem from a difference in the way that the fast AHP is measured. However, we note that during the patch clamp recordings some of the cells were filled with a high-affinity Ca 2+ sensor, which may have affected several Ca-sensitive mechanisms (such as Ca-dependent potassium currents) in the cell, and therefore may have influenced features like the AP width and properties of the spike after-hyperpolarization. We also performed a more specific review of the relevant literature to compare the most important somatic features of the patch clamp dataset to results from available patch clamp recordings . Our analysis confirmed that the values of several basic electrophysiological features such as the AP voltage threshold, the AP amplitude, the AP width, and the amplitude of the hyperpolarizing sag extracted from our patch clamp dataset fall into the range observed experimentally. We conclude that the patch clamp dataset is in good agreement with experimental observations available in the literature, and will be used as a representative example in this study. The observation data are loaded from a JSON file of a given format which contains the names of the features to be evaluated, the current amplitude for which the given feature is evaluated and the corresponding experimental mean and standard deviation values. 
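The way such an observation is combined with the model's feature values can be sketched as follows; the structure and the numbers below are placeholders chosen for illustration, not values taken from the experimental datasets.

```python
import numpy as np

# Placeholder observation (feature -> experimental mean/std per amplitude) and a
# matching model prediction, illustrating how feature scores are obtained.
observation = {
    "0.6": {"AP_amplitude": {"mean": 78.0, "std": 6.0},
            "inv_first_ISI": {"mean": 12.0, "std": 4.0}},
}
prediction = {
    "0.6": {"AP_amplitude": 71.0, "inv_first_ISI": 15.5},
}

feature_scores = []
for amp, features in observation.items():
    for name, stats in features.items():
        # Z-score: absolute deviation from the experimental mean in units of SD
        z = abs(prediction[amp][name] - stats["mean"]) / stats["std"]
        feature_scores.append(z)

final_score = float(np.mean(feature_scores))  # average Z-score over all features
```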
The feature means and standard deviations are extracted using BluePyEfe from a number of voltage traces recorded from several different cells. Its output can be converted to stimulus and feature JSON files used by HippoUnit using the script available here: https://github.com/sasaray/HippoUnit_demo/blob/master/target_features/Examples_on_creating_JSON_files/Somatic_Features/convert_new_output_feature_data_for_valid.py . Setting the specify_data_set parameter it can be ensured that the test results against different experimental data sets are saved into different folders. For certain features eFEL returns a vector as a result; in these cases, the feature value used by HippoUnit is the average of the elements of the vector. These are typically spiking features for which eFEL extracts a value corresponding to each spike fired. For features that use the ‘AP_begin_time’ or ‘AP_begin_voltage’ feature values for further calculations, we exclude the first element of the vector output before averaging because we discovered that these features are often incorrectly detected for the first action potential of a train. The score class of this test returns as the final score the average of Z-scores for the evaluated eFEL features achieved by the model. Those features that could not be evaluated (e.g., spiking features from voltage responses without any spikes) are listed in a log file to inform the user, and the number of successfully evaluated features out of the number of features attempted to be evaluated is also reported. The Depolarization Block Test This test aims to determine whether the model enters depolarization block in response to a prolonged, high intensity somatic current stimulus. For CA1 pyramidal cells, the test relies on experimental data from Bianchi et al. . According to these data, rat CA1 PCs respond to somatic current injections of increasing intensity with an increasing number of action potentials until a certain threshold current intensity is reached. For current intensities higher than the threshold, the cell does not fire over the whole period of the stimulus; instead, firing stops after some action potentials, and the membrane potential is sustained at some constant depolarized level for the rest of the stimulus. This phenomenon is termed depolarization block . This test uses the same capability class as the Somatic Features Test for injecting current and recording the somatic membrane potential (see the description above). Using this capability, the model is stimulated with 1000 ms long square current pulses increasing in amplitude from 0 to 1.6 nA in 0.05 nA steps, analogous to the experimental protocol. The stimuli of different amplitudes are run in parallel. Somatic spikes are detected and counted using eFEL . From the somatic voltage responses of the model, the following features are evaluated. I th is the threshold current to reach depolarization block; experimentally, this is both the amplitude of the current injection at which the cell exhibits the maximum number of spikes, and the highest stimulus amplitude that does not elicit depolarization block. In the test two separate features are evaluated for the model and compared to the experimental I th : the current intensity for which the model fires the maximum number of action potentials ( I_maxNumAP ), and the current intensity one step before the model enters depolarization block ( I_below_depol_block ). If these two feature values are not equal, a penalty is added to the score. 
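A minimal sketch of how these two current-intensity features can be derived from the simulated responses is given below; the spike counts and the end-of-stimulus firing flags are placeholder inputs standing in for the values extracted with eFEL, and the logic is a simplified illustration rather than HippoUnit's actual code.

```python
# Placeholder data: stimulus amplitude (nA) -> number of somatic spikes, and
# whether any spike occurs during the last 100 ms of the stimulus.
spike_counts = {0.90: 24, 0.95: 27, 1.00: 30, 1.05: 7, 1.10: 5}
fires_at_end = {0.90: True, 0.95: True, 1.00: True, 1.05: False, 1.10: False}

amplitudes = sorted(spike_counts)

# amplitude eliciting the maximum number of action potentials
I_maxNumAP = max(amplitudes, key=lambda a: spike_counts[a])

# highest amplitude below the first one at which firing stops before stimulus end
I_below_depol_block = None
for prev, amp in zip(amplitudes, amplitudes[1:]):
    if spike_counts[amp] > 0 and not fires_at_end[amp]:
        I_below_depol_block = prev
        break

# a penalty is added to the final score if the two values differ (see text)
penalty_applies = (I_below_depol_block is not None
                   and I_maxNumAP != I_below_depol_block)
```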
The model is defined to exhibit depolarization block if I_maxNumAP is not the highest amplitude tested, and if there exists a current intensity higher than I_maxNumAP , for which the model does not fire action potentials during the last 100 ms of its voltage response. In the experiment the V eq feature is extracted from the voltage response of the pyramidal cells to the current injection one step above I th (or I_max_num_AP in the test). Both in the experiment and in this test this is calculated as the mean voltage over the last 100 ms of the voltage trace. However, in the test, before calculating this value it is examined whether there are any action potentials during this period. The presence of spikes here means that the model did not enter depolarization block prior to this period. In these cases the test iterates further on the voltage traces corresponding to larger current steps to find if there is any where the model actually entered depolarization block; if an appropriate trace is found, the value of V eq is extracted there. This trace is the response to the current intensity one step above I_below_depol_block . If the model does not enter depolarization block, a penalty is applied, and the final score gets the value of 100. Otherwise, the final score achieved by the model on this test is the average of the feature scores (Z-scores) for the features described above, plus an additional penalty if I_maxNumAP and I_below_depol_block differ. This penalty is 200 times the difference between the two current amplitude values (in pA–which in this case is 10 times the number of examined steps between them). The Back-propagating AP Test This test evaluates the strength of action potential back-propagation in the apical trunk at locations of different distances from the soma. The observation data for this test were yielded by the digitization of Fig 1B of , using the DigitizeIt software . The values were then averaged over distances of 50, 150, 250, 350 ± 20 μm from the soma to get the mean and standard deviation of the features. The features tested here are the amplitudes of the first and last action potentials of a 15 Hz spike train, measured at the 4 different dendritic locations. The test automatically finds current amplitudes for which the soma fires, on average, between 10–20 Hz and chooses the amplitude that leads to firing nearest to 15 Hz. For this task, the following algorithm was implemented. Increasing current step stimuli of 0.0–1.0 nA amplitude with a step size of 0.1 nA are applied to the model and the number of spikes is counted for each resulting voltage trace. If spontaneous spiking occurs (i.e., if there are spikes even when no current is injected) or if the spiking rate does not reach 10 Hz even for the highest amplitude, the test quits with an error message. Otherwise the amplitudes for which the soma fires between 10 and 20 Hz are appended to a list and (if the list is not empty) the one providing the spiking rate nearest to 15 Hz is chosen. If the list is empty because the spiking rate is smaller than 10 Hz for a step amplitude but higher than 20 Hz for the next step, a binary search method is used to find an appropriate amplitude in this range. This test uses a trunk section list (or generates one if the find_section_lists variable of the ModelLoader is set to True–see the section ‘Classifying the apical sections of pyramidal cells’ below) to automatically find the dendritic locations for the measurements. 
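The current-amplitude search described above can be sketched as follows. Here spike_frequency is an assumed helper standing in for running the step-current simulation on the model and counting somatic spikes; it is introduced only for illustration.

```python
def find_amplitude_near_15Hz(spike_frequency, max_iter=20):
    """Return a current amplitude (nA) for which the soma fires at 10-20 Hz,
    preferring the rate closest to 15 Hz. `spike_frequency(amp)` is assumed to
    run the simulation and return the somatic firing rate in Hz."""
    if spike_frequency(0.0) > 0:
        raise RuntimeError("spontaneous spiking - test aborted")

    amps = [round(0.1 * i, 1) for i in range(11)]        # 0.0 ... 1.0 nA
    rates = {a: spike_frequency(a) for a in amps}
    if rates[amps[-1]] < 10:
        raise RuntimeError("firing rate below 10 Hz even at 1.0 nA")

    in_range = [a for a in amps if 10 <= rates[a] <= 20]
    if in_range:
        return min(in_range, key=lambda a: abs(rates[a] - 15))

    # rate jumps from <10 Hz to >20 Hz between two steps: binary search there
    for lo, hi in zip(amps, amps[1:]):
        if rates[lo] < 10 and rates[hi] > 20:
            for _ in range(max_iter):
                mid = (lo + hi) / 2.0
                r = spike_frequency(mid)
                if 10 <= r <= 20:
                    return mid
                lo, hi = (mid, hi) if r < 10 else (lo, mid)
    raise RuntimeError("no amplitude with 10-20 Hz firing found")
```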
The desired distances of the locations from the soma and the distance tolerance are read from the input configuration file, and must agree with the distances and the tolerance over which the experimental data were averaged. All the trunk dendritic segments whose distance from the soma falls into one of the distance ranges are selected. The locations and also their distances are then returned in separate dictionaries. Then the soma is stimulated with a current injection of the previously chosen amplitude and the voltage response of the soma and the selected dendritic locations are recorded and returned. The test implements its own function to extract the amplitudes of back-propagating action potentials, but the method is based on eFEL features. This is needed because eFEL’s spike detection is based on a given threshold value for spike initiation, which may not be reached by the back-propagating signal at more distant regions. First the maximum depolarization of the first and the last action potentials are calculated. This is the maximum value of the voltage trace in a time interval around the somatic action potential, based on the start time of the spike (using the AP_begin_time feature of eFEL) and the inter-spike interval to the next spike recorded at the soma. Then the amplitudes are calculated as the difference between this maximum value and the voltage at the begin time of the spike (on the soma) minus 1 ms (which is early enough not to include the rising phase of the spike, and late enough in the case of the last action potential not to include the afterhyperpolarization of the previous spike). To calculate the feature scores the amplitude values are first averaged over the distance ranges to be compared to the experimental data and get the feature Z-scores. The final score here is the average of the Z-scores achieved for the features of first and last action potential amplitudes at different dendritic distances. In the result it is also stated whether the model is more like a strongly or a weakly propagating cell in the experiment, where they found examples of both types . The PSP Attenuation Test The PSP Attenuation Test evaluates how much the post-synaptic potential attenuates as it propagates from different dendritic locations to the soma in rat CA1 pyramidal cell models. The observation data for this test were yielded by the digitization of and of Magee and Cook, 2000 using the DigitizeIt software . The somatic and dendritic depolarization values were then averaged over distances of 100, 200, 300 ± 50 μm from the soma and the soma/dendrite attenuation was calculated to get the mean and standard deviation of the attenuation features at the three different input distances. The digitized data and the script that calculates the feature means and standard deviations, and creates the JSON file are available here: https://github.com/sasaray/HippoUnit_demo/tree/master/target_features/Examples_on_creating_JSON_files/Magee2000-PSP_att/ . In this test the apical trunk receives excitatory post-synaptic current (EPSC)-shaped current stimuli at locations of different distances from the soma. The maximum depolarization caused by the input is extracted at the soma and divided by the maximum depolarization at the location of the stimulus to get the soma/dendrite attenuation values that are then averaged in distance ranges of 100, 200, 300 ± 50 μm and compared to the experimental data. 
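Both this test and the Back-propagating AP Test group their measurements by distance from the soma before comparing them with the experimental averages; a simple sketch of this grouping step is shown below (function and variable names are illustrative).

```python
import numpy as np

def average_by_distance(values, distances, centers, tolerance):
    """Average measured values (e.g., soma/dendrite attenuation or bAP amplitude)
    over distance ranges centered on `centers` with the given tolerance (um)."""
    means = {}
    for c in centers:
        in_range = [v for v, d in zip(values, distances) if abs(d - c) <= tolerance]
        means[c] = float(np.mean(in_range)) if in_range else float("nan")
    return means

# e.g., for the PSP Attenuation Test: centers=(100, 200, 300), tolerance=50;
# for the Back-propagating AP Test: centers=(50, 150, 250, 350), tolerance=20.
```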
The distances and tolerance are defined in the configuration file and must agree with how the observation data were generated. The test uses a trunk section list, which needs to be specified in the NEURON HOC model (or the test generates one if the find_section_lists variable of the ModelLoader is set to True–see the section ‘Classify apical sections of pyramidal cells’ below) to find the dendritic locations to be stimulated. Randomly selected dendritic locations are used because the distance ranges that are evaluated cover almost the whole length of the trunk of a pyramidal cell. The probability of selecting a given dendritic segment is set to be proportional to its length. The number of dendritic segments examined can be chosen by the user by setting the num_of_dend_locations argument of the test. The random seed (also an argument of the test) must be kept constant to make the selection reproducible. If a given segment is selected multiple times (or it is closer than 50 μm or further than 350 μm), a new random number is generated. If the number of locations to be selected is more than the number of trunk segments available in the model, all the segments are selected. The Exp2Syn synaptic model of NEURON with a previously calculated weight is used to stimulate the dendrite. The desired EPSC amplitude and time constants are given in the input configuration file according to the experimental protocol. To get the proper synaptic weight, first the stimulus is run with weight = 0. The last 10% of the trace is averaged to get the resting membrane potential (Vm). Then the synaptic weight required to induce EPSCs with the experimentally determined amplitude is calculated according to:

weight = -EPSC_amp / Vm (1)

where EPSC_amp is read from the config dictionary, and the synaptic reversal potential is assumed to be 0 mV. To get the somatic and dendritic maximum depolarization from the voltage traces, the baseline trace (weight = 0) is subtracted from the trace recorded in the presence of the input. To get the attenuation ratio the maximum value of the somatic depolarization is divided by the maximum value of the dendritic depolarization. To calculate the feature scores the soma/dendrite attenuation values are first averaged over the distance ranges to be compared to the experimental data to get the feature Z-scores. The final score is the average of the feature scores calculated at the different dendritic locations. The Oblique Integration Test This test evaluates the signal integration properties of radial oblique dendrites, determined by providing an increasing number of synchronous (0.1 ms between inputs) or asynchronous (2 ms between inputs) clustered synaptic inputs. The experimental mean and standard error (SE) of the features examined are available in the paper of Losonczy and Magee and are read from a JSON file into the observation dictionary of the test. The SE values are then converted to standard deviation values.
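The synaptic weight calculation of Eq (1) and the attenuation measurement described above can be sketched with NEURON's Python interface as follows. Exp2Syn, NetStim and NetCon are standard NEURON objects (Exp2Syn also provides the default AMPA component of the synapse used by the Oblique Integration Test), while the helper functions and their arguments are illustrative assumptions rather than HippoUnit's actual implementation.

```python
from neuron import h
import numpy as np

def attach_epsc_like_input(dend_seg, epsc_amp, tau1, tau2, vm, onset=300.0):
    """Place an Exp2Syn at `dend_seg` and set its weight so that the resulting
    EPSC has the desired amplitude (Eq 1 in the text); `vm` is the resting
    potential measured from the baseline (weight = 0) run."""
    syn = h.Exp2Syn(dend_seg)
    syn.tau1, syn.tau2, syn.e = tau1, tau2, 0.0      # reversal potential: 0 mV
    stim = h.NetStim()
    stim.number, stim.start = 1, onset
    nc = h.NetCon(stim, syn)
    nc.weight[0] = -epsc_amp / vm                     # Eq (1)
    return syn, stim, nc                              # keep references alive

def attenuation_ratio(v_soma, v_dend, v_soma_base, v_dend_base):
    """Soma/dendrite attenuation from baseline-subtracted voltage traces."""
    soma_depol = np.max(np.asarray(v_soma) - np.asarray(v_soma_base))
    dend_depol = np.max(np.asarray(v_dend) - np.asarray(v_dend_base))
    return soma_depol / dend_depol
```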
The following features are tested: voltage threshold for dendritic spike initiation (defined as the expected somatic depolarization at which a step-like increase in peak dV/dt occurs); proximal threshold (defined the same way as above, but including only those results in the statistics where the proximal part of the examined dendrite was stimulated); distal threshold; degree of nonlinearity at threshold; suprathreshold degree of nonlinearity; peak derivative of somatic voltage at threshold; peak amplitude of somatic EPSP; time to peak of somatic EPSP; degree of nonlinearity in the case of asynchronous inputs. The test automatically selects a list of oblique dendrites that meet the criteria of the experimental protocol, based on a section list containing the oblique dendritic sections (this can either be provided by the HOC model, or generated automatically if the find_section_lists variable of the ModelLoader is set to True–see the section ‘Classify apical sections of pyramidal cells’ below). For each selected oblique dendrite a proximal and a distal location is examined. The criteria for the selection of dendrites, which were also applied in the experiments, are the following. The selected oblique dendrites should be terminal dendrites (they have no child sections) and they should be at most 120 μm from the soma. This latter criterion can be changed by the user by changing the value of the ModelLoader’s max_dist_from_soma variable, and it can also increase automatically if needed. In particular, if no appropriate oblique is found up to the upper bound provided, the distance is increased iteratively by 15 μm, but not further than 190 μm. Then an increasing number of synaptic inputs are activated at the selected dendritic locations separately, while recording the local and somatic voltage response. HippoUnit provides a default synapse model to be used in the ObliqueIntegrationTest . If the AMPA_name , and NMDA_name variables are not set by the user, the default synapse is used. In this case the AMPA component of the synapse is given by the built-in Exp2Syn synapse of NEURON, while the NMDA component is defined in an NMODL (.mod) file which is part of the HippoUnit package. This NMDA receptor model uses a Jahr-Stevens voltage dependence and rise and decay time constants of 3.3 and 102.38 ms, respectively. The time constant values used here are temperature- (Q10-) corrected values from . Q10 values for the rise and decay time constants were 2.2 and 1.7 , respectively. The model’s own AMPA and NMDA receptor models can also be used in this test if their NMODL files are available and compiled among the other mechanisms of the model. In this case the AMPA_name , and NMDA_name variables need to be provided by the user. The time constants of the built-in Exp2Syn AMPA component and the AMPA/NMDA ratio can be adjusted by the user by setting the AMPA_tau1 , AMPA_tau2 and AMPA_NMDA_ratio parameter of the ModelLoader . The default AMPA/NMDA ratio is 2.0 from , and the default AMPA_tau1 and AMPA_tau2 are 0.1 ms and 2.0 ms, respectively . To test the Poirazi et al. 2003 model using its own receptor models, we also had to implement a modified version of the synapse functions of the ModelLoader that can deal with the different (pointer-based) implementation of synaptic activation in this model. For this purpose, a child class was implemented that inherits from the ModelLoader . 
This modified version is not part of the official HippoUnit version, because this older, more complicated implementation of synaptic models is not generally used anymore; however, this is a good example on how one can modify the capability methods of HippoUnit to match their own models or purposes. The code for this modified ModelLoader is available here: https://github.com/KaliLab/HippoUnit_demo/blob/master/ModelLoader_Poirazi_2003_CA1.py . The synaptic weights for each selected dendritic location are automatically adjusted by the test using a binary search algorithm so that the threshold for dendritic spike generation is 5 synchronous inputs–which was the average number of inputs that had to be activated by glutamate uncaging to evoke a dendritic spike in the experiments . This search runs in parallel for all selected dendritic locations. The search interval of the binary search and the initial step size of the searching range can be adjusted by the user through the c_minmax and c_step_start variables of the ModelLoader . During the iterations of the algorithm the step size may decrease if needed; a lower threshold for the step size ( c_step_stop variable of the ModelLoader ) must be set to avoid infinite looping. Those dendritic locations where this first dendritic spike generates a somatic action potential, or where no dendritic spike can be evoked, are excluded from further analysis. To let the user know, this information is displayed on the output and also printed into the log file saved by the test. Most of the features above are extracted at the threshold input level (5 inputs). The final score of this test is the average of the feature scores achieved by the model for the different features; however, a T-test analysis is also available as a separate score type for this test. Parallel computing Most of the tests of HippoUnit require multiple simulations of the same model, either using stimuli of different intensities or at different locations in the cell. To run these simulations in parallel and save time, the Python multiprocessing.Pool module is used. The size of the pool can be set by the user. Moreover, all NEURON simulations are performed in multiprocessing pools to ensure that they run independently of each other, and to make it easy to erase the models after the process has finished. This is especially important in the case of HOC templates in order to avoid previously loaded templates running in the background and the occurrence of ‘Template cannot be redefined’ errors when the same model template is loaded again. Classifying the apical sections of pyramidal cells Some of the validation tests of HippoUnit require lists of sections belonging to the different dendritic types of the apical tree (main apical trunk, apical tuft dendrites, and radial oblique dendrites). To classify the dendrites NeuroM is used as a base package. NeuroM contains a script that, starting from the tuft (uppermost dendritic branches in ) endpoints, iterates down the tree to find a single common ancestor. This is considered as the apical point. The apical point is the upper end of the main apical dendrite (trunk), from where the tuft region arises. Every dendrite branching from the trunk below this point is considered an oblique dendrite. However, there are many CA1 pyramidal cell morphologies where the trunk bifurcates close to the soma to form two or even more branches. In these cases the method described above finds this proximal bifurcation point as the apical point (see ). 
To overcome this issue, we worked out and implemented a method to find multiple apical points by iterating the function provided by NeuroM. In particular, if the initial apical point is closer to the soma than a pre-defined threshold, the function is run again on subtrees of the apical tree where the root node of the subtree is the previously found apical point, to find apical points on those subtrees (see ). When (possibly after multiple iterations) apical points that are far enough from the soma are found, NeuroM is used to iterate down from them on the parent sections, which will be the trunk sections (blue dots in ). Iterating up, the tuft sections are found (green dots in ), and the other descendants of the trunk sections are considered to be oblique dendrites (yellow dots in ). Once all the sections are classified, their NeuroM coordinates are converted to NEURON section information for further use. We note that this function can only be used for hoc models that load their morphologies from a separate morphology file (e.g., ASC, SWC) as NeuroM can only deal with morphologies provided in these standard formats. For models with NEURON morphologies implemented directly in the hoc language, the SectionLists required by a given test should be implemented within the model. Models from literature In this paper we demonstrate the utility of the HippoUnit validation test suite by applying its tests to validate and compare the behavior of several different detailed rat hippocampal CA1 pyramidal cell models available on ModelDB . For this initial comparison we chose models published by several modeling groups worldwide that were originally developed for various purposes. The models compared were the following: the Golding et al., 2001 model (ModelDB accession number: 64167), the Katz et al., 2009 model (ModelDB accession number: 127351), the Migliore et al., 2011 model (ModelDB accession number: 138205), the Poirazi et al., 2003 model (ModelDB accession number: 20212), the Bianchi et al., 2012 model (ModelDB accession number: 143719), and the Gómez González et al., 2011 model (ModelDB accession number: 144450). Models from literature that are published on ModelDB typically implement their own simulations and plots to make it easier for users and readers to reproduce and visualize the results shown in the corresponding paper. Therefore, to be able to test the models described above using our test suite, we needed to create standalone versions of them. These standalone versions do not display any GUI, or contain any built-in simulations and run-time modifications, but otherwise their behavior should be identical to the published version of the models. We also added section lists of the radial oblique and the trunk dendritic sections to those models where this was not done yet, as some of the tests require these lists. To ensure that the standalone versions have the same properties as the original models, we checked their parameters after running their built-in simulations (in case including any run-time modifications), and made sure they match the parameters of the standalone version. The modified models used for running validation tests are available in this GitHub repository: https://github.com/KaliLab/HippoUnit_demo .
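Putting the pieces together, a validation run on one of these standalone models follows the SciUnit pattern described earlier: a test is instantiated with the experimental observation and handed a model object that implements the required capabilities. The class names, constructor arguments and file paths below are assumptions based on the description above and on typical SciUnit usage; the exact API should be checked against the HippoUnit repository.

```python
import json

# Hypothetical driver script; ModelLoader, SomaticFeaturesTest and their
# arguments are illustrative and may differ from the actual HippoUnit API.
from hippounit.utils import ModelLoader
from hippounit import tests

with open("target_features/somatic_features.json") as f:
    observation = json.load(f)
with open("config/somatic_stimuli.json") as f:
    config = json.load(f)

model = ModelLoader(name="Migliore_2011",
                    hoc_file="models/Migliore_2011/main.hoc")  # placeholder path
test = tests.SomaticFeaturesTest(observation=observation, config=config)

score = test.judge(model)   # judge() comes from the underlying SciUnit framework
print(score)
```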
HippoUnit is a Python test suite based on the SciUnit framework, which is a Python package for testing scientific models, and during its implementation the NeuronUnit package was taken into account as an example of how to use the SciUnit framework for testing neuronal models. In SciUnit tests usually four main classes are implemented: the test class, the model class, the capabilities class and the score class. HippoUnit is built in a way that keeps this structure. The key idea behind this structure is the decoupling of the model implementation from the test implementation by defining standardized interfaces (capabilities) between them, so that tests can easily be used with different models without being rewritten, and models can easily be adapted to fit the framework. Each test of HippoUnit is a separate Python class that, similarly to other SciUnit packages, can run simulations on the models to generate model predictions , which can be compared with experimental observations to yield the final score, provided that the model has the required capabilities implemented to mimic the appropriate experimental protocol and produce the same type of measurable output. All measured or calculated data that contribute to the final score (including the recorded voltage traces, the extracted features and the calculated feature scores) are saved in JSON or pickle files (or, in many cases, in both types of files). JSON files are human readable, and can be easily loaded into Python dictionaries. Data with a more complex structure are saved into pickle files. This makes it possible to easily write and read the data (for further processing or analysis) without changing its Python structure, no matter what type of object or variable it is. In addition to the JSON files a text file (log file) is also saved, that contains the final score and some useful information or notes specific to the given test and model. Furthermore, the recorded voltage traces, the extracted features and the calculated feature scores are also plotted for visualization. Similarly to many of the existing SciUnit packages the implementations of specific models are not part of the HippoUnit package itself. Instead, HippoUnit contains a general ModelLoader class. This class is implemented in a way that it is able to load and deal with most types of models defined in the HOC language of the NEURON simulator (either as standalone HOC models or as HOC templates) . It implements all model-related methods (capabilities) that are needed to simulate these kinds of neural models in order to generate the prediction without any further coding required from the user. For the smooth validation of the models developed using parameter optimization within the HBP there is a child class of the ModelLoader available in HippoUnit that is called ModelLoader_BPO . This class inherits most of the functions (especially the capability functions) from the ModelLoader class, but it implements additional functions that are able to automatically deal with the specific way in which information is represented and stored in these optimized models. The role of these functions is to gather all the information from the metadata and configuration files of the models that are needed to set the parameters required to load the models and run the simulations on them (such as path to the model files, name of the model template or the simulation temperature (the celsius variable of Neuron)). 
This enables the validation of these models without any manual intervention needed from the user. The section lists required by the tests of HippoUnit are also created automatically using the morphology files of these models (for details see the “Classify apical sections of pyramidal cells” subsection). For neural models developed using other software and methods, the user needs to implement the capabilities through which the tests of HippoUnit perform the simulations and recordings on the model. The capabilities are the interface between the tests and the models. The ModelLoader class inherits from the capabilities and must implement the methods of the capability. The test can only be run on a model if the necessary capability methods are implemented in the ModelLoader . All communication between the test and the model happens through the capabilities. The methods of the score classes perform the quantitative comparison between the prediction and the observation , and return the score object containing the final score and some related data, such as the paths to the saved figure and data (JSON) files and the prediction and observation data. Although SciUnit and NeuronUnit have a number of different score types implemented, those typically compare a single prediction value to a single observation value, while the tests of HippoUnit typically extract several features from the model’s response to be compared with experimental data. Therefore, each test of HippoUnit has its own score class implemented that is designed to deal with the specific structure of the output prediction data and the corresponding observation data. For simplicity, we refer to the discrepancy between the target experimental data ( observation ) and the models’ behavior ( prediction ) with respect to a studied feature using the term feature score. In most cases, when the basic statistics (mean and standard deviation) of the experimental features (typically measured in several different cells of the same cell type) are available, feature scores are computed as the absolute difference between the feature value of the model and the experimental mean feature value, divided by the experimental standard deviation (Z-score) . The final score of a given test achieved by a given model is given by the average (or, in some cases, the sum) of the feature scores for all the features evaluated by the test.
The Somatic Features Test uses the Electrophys Feature Extraction Library (eFEL) to extract and evaluate the values of both subthreshold and suprathreshold (spiking) features from voltage traces that represent the response of the model to somatic current injections of different positive (depolarizing) and negative (hyperpolarizing) current amplitudes. Spiking features describe action potential shape (such as AP width, AP rise/fall rate, AP amplitude, etc.) and timing (frequency, inter-spike intervals, time to first/last spike, etc.), while some passive features (such as the voltage base or the steady state voltage), and subthreshold features for negative current stimuli (voltage deflection, sag amplitude, etc.) are also examined. In this test step currents of varying amplitudes are injected into the soma of the model and the voltage response is recorded. The simulation protocol is set according to an input configuration JSON file, which contains all the current amplitudes, the delay and the duration of the stimuli, and the stimulation and recording positions. Simulations using different current amplitudes are run in parallel if this is supported by the computing environment. As the voltage responses of neurons to somatic current injections can strongly depend on the experimental method, and especially on the type of electrode used, target values for these features were extracted from two different datasets. One dataset was obtained from sharp electrode recordings from adult rat CA1 neurons (this will be called the sharp electrode dataset) , and the other dataset is from patch clamp recordings in rat CA1 pyramidal cells (data provided by Judit Makara, which will be referred to as the patch clamp dataset). For both of these datasets we had access to the recorded voltage traces from multiple neurons, which made it possible to perform our own feature extraction using eFEL. This ensures that the features are interpreted and calculated the same way for both the experimental data and the models’ voltage response during the simulation. Furthermore, it allows a more thorough comparison against a large number of features extracted from experimental recordings yielded using the exact same protocol, which is unlikely to be found in any paper of the available literature. However, to see how representative these datasets are of the literature as a whole we first compared some of the features extracted from these datasets to data available on Neuroelectro.org and on Hippocampome.org . The features we compared were the following: resting potential, voltage threshold, after-hyperpolarization (AHP) amplitudes (fast, slow), action potential width and sag ratio. Although these databases have mean and standard deviation values for these features that are calculated from measurements using different methods, protocols and from different animals, we found that most of the feature values for our two experimental datasets fall into the ranges declared as typical for CA1 PCs in the online databases. The only conspicuous exception is the fast AHP amplitude of the patch clamp dataset used in this study, which is 1.7 ± 1.5 mV, while the databases cite values between 6.8 and 11.64 mV. This deviation could possibly stem from a difference in the way that the fast AHP is measured. 
However, we note that during the patch clamp recordings some of the cells were filled with a high-affinity Ca 2+ sensor, which may have affected several Ca-sensitive mechanisms (such as Ca-dependent potassium currents) in the cell, and therefore may have influenced features like the AP width and properties of the spike after-hyperpolarization. We also performed a more specific review of the relevant literature to compare the most important somatic features of the patch clamp dataset to results from available patch clamp recordings . Our analysis confirmed that the values of several basic electrophysiological features such as the AP voltage threshold, the AP amplitude, the AP width, and the amplitude of the hyperpolarizing sag extracted from our patch clamp dataset fall into the range observed experimentally. We conclude that the patch clamp dataset is in good agreement with experimental observations available in the literature, and will be used as a representative example in this study. The observation data are loaded from a JSON file of a given format which contains the names of the features to be evaluated, the current amplitude for which the given feature is evaluated and the corresponding experimental mean and standard deviation values. The feature means and standard deviations are extracted using BluePyEfe from a number of voltage traces recorded from several different cells. Its output can be converted to stimulus and feature JSON files used by HippoUnit using the script available here: https://github.com/sasaray/HippoUnit_demo/blob/master/target_features/Examples_on_creating_JSON_files/Somatic_Features/convert_new_output_feature_data_for_valid.py . Setting the specify_data_set parameter it can be ensured that the test results against different experimental data sets are saved into different folders. For certain features eFEL returns a vector as a result; in these cases, the feature value used by HippoUnit is the average of the elements of the vector. These are typically spiking features for which eFEL extracts a value corresponding to each spike fired. For features that use the ‘AP_begin_time’ or ‘AP_begin_voltage’ feature values for further calculations, we exclude the first element of the vector output before averaging because we discovered that these features are often incorrectly detected for the first action potential of a train. The score class of this test returns as the final score the average of Z-scores for the evaluated eFEL features achieved by the model. Those features that could not be evaluated (e.g., spiking features from voltage responses without any spikes) are listed in a log file to inform the user, and the number of successfully evaluated features out of the number of features attempted to be evaluated is also reported.
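The snippet below sketches the kind of eFEL call that underlies this test, using eFEL's getFeatureValues interface (the exact function name may differ across eFEL versions). The trace here is a dummy flat signal and the feature names are only examples; the real test reads the stimulus parameters and the feature list from its configuration and observation JSON files.

import efel
import numpy as np

t = np.linspace(0.0, 500.0, 5001)   # time in ms
v = np.full_like(t, -65.0)          # dummy flat trace; real traces come from NEURON

trace = {"T": t, "V": v, "stim_start": [100.0], "stim_end": [400.0]}
feature_names = ["voltage_base", "steady_state_voltage", "AP_amplitude", "AP_width"]

results = efel.getFeatureValues([trace], feature_names)[0]
for name, values in results.items():
    # eFEL returns an array per feature (one element per spike for spiking features);
    # HippoUnit averages the elements and logs features that could not be evaluated.
    if values is None or len(values) == 0:
        print(name, "could not be evaluated for this trace")
    else:
        print(name, float(np.mean(values)))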
This test aims to determine whether the model enters depolarization block in response to a prolonged, high intensity somatic current stimulus. For CA1 pyramidal cells, the test relies on experimental data from Bianchi et al. . According to these data, rat CA1 PCs respond to somatic current injections of increasing intensity with an increasing number of action potentials until a certain threshold current intensity is reached. For current intensities higher than the threshold, the cell does not fire over the whole period of the stimulus; instead, firing stops after some action potentials, and the membrane potential is sustained at some constant depolarized level for the rest of the stimulus. This phenomenon is termed depolarization block . This test uses the same capability class as the Somatic Features Test for injecting current and recording the somatic membrane potential (see the description above). Using this capability, the model is stimulated with 1000 ms long square current pulses increasing in amplitude from 0 to 1.6 nA in 0.05 nA steps, analogous to the experimental protocol. The stimuli of different amplitudes are run in parallel. Somatic spikes are detected and counted using eFEL . From the somatic voltage responses of the model, the following features are evaluated. I th is the threshold current to reach depolarization block; experimentally, this is both the amplitude of the current injection at which the cell exhibits the maximum number of spikes, and the highest stimulus amplitude that does not elicit depolarization block. In the test two separate features are evaluated for the model and compared to the experimental I th : the current intensity for which the model fires the maximum number of action potentials ( I_maxNumAP ), and the current intensity one step before the model enters depolarization block ( I_below_depol_block ). If these two feature values are not equal, a penalty is added to the score. The model is defined to exhibit depolarization block if I_maxNumAP is not the highest amplitude tested, and if there exists a current intensity higher than I_maxNumAP , for which the model does not fire action potentials during the last 100 ms of its voltage response. In the experiment the V eq feature is extracted from the voltage response of the pyramidal cells to the current injection one step above I th (or I_max_num_AP in the test). Both in the experiment and in this test this is calculated as the mean voltage over the last 100 ms of the voltage trace. However, in the test, before calculating this value it is examined whether there are any action potentials during this period. The presence of spikes here means that the model did not enter depolarization block prior to this period. In these cases the test iterates further on the voltage traces corresponding to larger current steps to find if there is any where the model actually entered depolarization block; if an appropriate trace is found, the value of V eq is extracted there. This trace is the response to the current intensity one step above I_below_depol_block . If the model does not enter depolarization block, a penalty is applied, and the final score gets the value of 100. Otherwise, the final score achieved by the model on this test is the average of the feature scores (Z-scores) for the features described above, plus an additional penalty if I_maxNumAP and I_below_depol_block differ. 
This penalty is 200 times the difference between the two current amplitude values (expressed in nA), which in this case is 10 times the number of examined steps between them.
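A simplified sketch of this decision logic is given below. The spike_counts and spike_counts_last_100ms arguments stand for the per-amplitude spike counts that the real test obtains with eFEL, and the amplitudes are assumed to be sorted in increasing order.

def depolarization_block_features(amplitudes, spike_counts, spike_counts_last_100ms):
    """amplitudes: stimulus amplitudes (nA) in increasing order; the two dicts map
    each amplitude to the number of spikes in the whole trace / in its last 100 ms."""
    I_maxNumAP = max(amplitudes, key=lambda amp: spike_counts[amp])
    I_below_depol_block = None
    for previous, amplitude in zip(amplitudes, amplitudes[1:]):
        if amplitude > I_maxNumAP and spike_counts_last_100ms[amplitude] == 0:
            I_below_depol_block = previous  # one step below the first blocked response
            break
    exhibits_block = (I_maxNumAP != amplitudes[-1]) and (I_below_depol_block is not None)
    # The final score combines the Z-scores of I_maxNumAP, I_below_depol_block and Veq,
    # plus the penalties described above when the model does not behave as expected.
    return I_maxNumAP, I_below_depol_block, exhibits_block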
This test evaluates the strength of action potential back-propagation in the apical trunk at locations of different distances from the soma. The observation data for this test were yielded by the digitization of Fig 1B of , using the DigitizeIt software . The values were then averaged over distances of 50, 150, 250, 350 ± 20 μm from the soma to get the mean and standard deviation of the features. The features tested here are the amplitudes of the first and last action potentials of a 15 Hz spike train, measured at the 4 different dendritic locations. The test automatically finds current amplitudes for which the soma fires, on average, between 10–20 Hz and chooses the amplitude that leads to firing nearest to 15 Hz. For this task, the following algorithm was implemented. Increasing current step stimuli of 0.0–1.0 nA amplitude with a step size of 0.1 nA are applied to the model and the number of spikes is counted for each resulting voltage trace. If spontaneous spiking occurs (i.e., if there are spikes even when no current is injected) or if the spiking rate does not reach 10 Hz even for the highest amplitude, the test quits with an error message. Otherwise the amplitudes for which the soma fires between 10 and 20 Hz are appended to a list and (if the list is not empty) the one providing the spiking rate nearest to 15 Hz is chosen. If the list is empty because the spiking rate is smaller than 10 Hz for a step amplitude but higher than 20 Hz for the next step, a binary search method is used to find an appropriate amplitude in this range. This test uses a trunk section list (or generates one if the find_section_lists variable of the ModelLoader is set to True–see the section ‘Classifying the apical sections of pyramidal cells’ below) to automatically find the dendritic locations for the measurements. The desired distances of the locations from the soma and the distance tolerance are read from the input configuration file, and must agree with the distances and the tolerance over which the experimental data were averaged. All the trunk dendritic segments whose distance from the soma falls into one of the distance ranges are selected. The locations and also their distances are then returned in separate dictionaries. Then the soma is stimulated with a current injection of the previously chosen amplitude and the voltage response of the soma and the selected dendritic locations are recorded and returned. The test implements its own function to extract the amplitudes of back-propagating action potentials, but the method is based on eFEL features. This is needed because eFEL’s spike detection is based on a given threshold value for spike initiation, which may not be reached by the back-propagating signal at more distant regions. First the maximum depolarization of the first and the last action potentials are calculated. This is the maximum value of the voltage trace in a time interval around the somatic action potential, based on the start time of the spike (using the AP_begin_time feature of eFEL) and the inter-spike interval to the next spike recorded at the soma. Then the amplitudes are calculated as the difference between this maximum value and the voltage at the begin time of the spike (on the soma) minus 1 ms (which is early enough not to include the rising phase of the spike, and late enough in the case of the last action potential not to include the afterhyperpolarization of the previous spike). 
To calculate the feature scores, the amplitude values are first averaged over the distance ranges and compared to the experimental data to obtain the feature Z-scores. The final score here is the average of the Z-scores achieved for the amplitudes of the first and last action potentials at the different dendritic distances. The output also states whether the model behaves more like a strongly or a weakly propagating cell; the underlying experiments found examples of both types.
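The amplitude-selection logic can be summarized as in the sketch below, where firing_rate(amplitude) stands for running the simulation at the given amplitude and measuring the somatic firing rate from the detected spikes; the binary-search branch used when the rate jumps from below 10 Hz to above 20 Hz between neighboring steps is only indicated in a comment.

def find_bap_stimulus_amplitude(firing_rate, max_amplitude=1.0, step=0.1):
    """Choose the somatic step-current amplitude whose firing rate is closest to 15 Hz."""
    if firing_rate(0.0) > 0:
        raise RuntimeError("Spontaneous spiking detected; the test quits with an error")
    candidates = []
    for i in range(1, int(max_amplitude / step) + 1):
        amplitude = round(i * step, 3)
        rate = firing_rate(amplitude)
        if 10.0 <= rate <= 20.0:
            candidates.append((abs(rate - 15.0), amplitude))
    if candidates:
        return min(candidates)[1]
    # If the rate jumps from below 10 Hz to above 20 Hz between neighboring steps, the
    # real test performs a binary search in that interval; otherwise it quits with an error.
    raise RuntimeError("No amplitude produced a firing rate between 10 and 20 Hz")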
The PSP Attenuation Test evaluates how much the post-synaptic potential attenuates as it propagates from different dendritic locations to the soma in rat CA1 pyramidal cell models. The observation data for this test were obtained by digitizing figures of Magee and Cook, 2000 using the DigitizeIt software. The somatic and dendritic depolarization values were then averaged over distances of 100, 200, 300 ± 50 μm from the soma and the soma/dendrite attenuation was calculated to get the mean and standard deviation of the attenuation features at the three different input distances. The digitized data and the script that calculates the feature means and standard deviations, and creates the JSON file are available here: https://github.com/sasaray/HippoUnit_demo/tree/master/target_features/Examples_on_creating_JSON_files/Magee2000-PSP_att/ . In this test the apical trunk receives excitatory post-synaptic current (EPSC)-shaped current stimuli at locations of different distances from the soma. The maximum depolarization caused by the input is extracted at the soma and divided by the maximum depolarization at the location of the stimulus to get the soma/dendrite attenuation values, which are then averaged in distance ranges of 100, 200, 300 ± 50 μm and compared to the experimental data. The distances and tolerance are defined in the configuration file and must agree with how the observation data were generated. The test uses a trunk section list, which needs to be specified in the NEURON HOC model (or the test generates one if the find_section_lists variable of the ModelLoader is set to True–see the section ‘Classify apical sections of pyramidal cells’ below), to find the dendritic locations to be stimulated. Randomly selected dendritic locations are used because the distance ranges that are evaluated cover almost the whole length of the trunk of a pyramidal cell. The probability of selecting a given dendritic segment is set to be proportional to its length. The number of dendritic segments examined can be chosen by the user by setting the num_of_dend_locations argument of the test. The random seed (also an argument of the test) must be kept constant to make the selection reproducible. If a given segment is selected multiple times (or it is closer than 50 μm or further than 350 μm from the soma), a new random number is generated. If the number of locations to be selected is greater than the number of trunk segments available in the model, all the segments are selected. The Exp2Syn synaptic model of NEURON with a previously calculated weight is used to stimulate the dendrite. The desired EPSC amplitude and time constants are given in the input configuration file according to the experimental protocol. To get the proper synaptic weight, the stimulus is first run with weight = 0. The last 10% of the resulting trace is averaged to get the resting membrane potential (Vm). Then the synaptic weight required to induce EPSCs with the experimentally determined amplitude is calculated according to Eq (1): weight = -EPSC_amp / Vm, where EPSC_amp is read from the config dictionary and the synaptic reversal potential is assumed to be 0 mV. To get the somatic and dendritic maximum depolarization from the voltage traces, the baseline trace (weight = 0) is subtracted from the trace recorded in the presence of the input. To get the attenuation ratio, the maximum value of the somatic depolarization is divided by the maximum value of the dendritic depolarization.
To calculate the feature scores, the soma/dendrite attenuation values are first averaged over the distance ranges and compared to the experimental data to obtain the feature Z-scores. The final score is the average of the feature scores calculated at the different dendritic locations.
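The weight calculation of Eq (1) and the attenuation ratio can be sketched as follows; the trace arrays are placeholders for the NEURON recordings produced by the test.

import numpy as np

def synaptic_weight(baseline_trace, epsc_amp):
    """Eq (1): weight needed to evoke an EPSC of amplitude epsc_amp,
    assuming a synaptic reversal potential of 0 mV."""
    v_rest = np.mean(baseline_trace[int(0.9 * len(baseline_trace)):])  # mean of the last 10% of the trace
    return -epsc_amp / v_rest

def attenuation_ratio(soma_trace, dend_trace, soma_baseline, dend_baseline):
    """Soma/dendrite attenuation of the peak depolarization caused by the input."""
    soma_depol = np.max(np.asarray(soma_trace) - np.asarray(soma_baseline))
    dend_depol = np.max(np.asarray(dend_trace) - np.asarray(dend_baseline))
    return soma_depol / dend_depol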
This test evaluates the signal integration properties of radial oblique dendrites, determined by providing an increasing number of synchronous (0.1 ms between inputs) or asynchronous (2 ms between inputs) clustered synaptic inputs. The experimental mean and standard error (SE) of the features examined are available in the paper of Losonczy and Magee and are read from a JSON file into the observation dictionary of the test. The SE values are then converted to standard deviation values. The following features are tested: voltage threshold for dendritic spike initiation (defined as the expected somatic depolarization at which a step-like increase in peak dV/dt occurs); proximal threshold (defined the same way as above, but including only those results in the statistics where the proximal part of the examined dendrite was stimulated); distal threshold; degree of nonlinearity at threshold; suprathreshold degree of nonlinearity; peak derivative of somatic voltage at threshold; peak amplitude of somatic EPSP; time to peak of somatic EPSP; degree of nonlinearity in the case of asynchronous inputs. The test automatically selects a list of oblique dendrites that meet the criteria of the experimental protocol, based on a section list containing the oblique dendritic sections (this can either be provided by the HOC model, or generated automatically if the find_section_lists variable of the ModelLoader is set to True–see the section ‘Classify apical sections of pyramidal cells’ below). For each selected oblique dendrite a proximal and a distal location is examined. The criteria for the selection of dendrites, which were also applied in the experiments, are the following. The selected oblique dendrites should be terminal dendrites (they have no child sections) and they should be at most 120 μm from the soma. This latter criterion can be changed by the user by changing the value of the ModelLoader’s max_dist_from_soma variable, and it can also increase automatically if needed. In particular, if no appropriate oblique is found up to the upper bound provided, the distance is increased iteratively by 15 μm, but not further than 190 μm. Then an increasing number of synaptic inputs are activated at the selected dendritic locations separately, while recording the local and somatic voltage response. HippoUnit provides a default synapse model to be used in the ObliqueIntegrationTest . If the AMPA_name , and NMDA_name variables are not set by the user, the default synapse is used. In this case the AMPA component of the synapse is given by the built-in Exp2Syn synapse of NEURON, while the NMDA component is defined in an NMODL (.mod) file which is part of the HippoUnit package. This NMDA receptor model uses a Jahr-Stevens voltage dependence and rise and decay time constants of 3.3 and 102.38 ms, respectively. The time constant values used here are temperature- (Q10-) corrected values from . Q10 values for the rise and decay time constants were 2.2 and 1.7 , respectively. The model’s own AMPA and NMDA receptor models can also be used in this test if their NMODL files are available and compiled among the other mechanisms of the model. In this case the AMPA_name , and NMDA_name variables need to be provided by the user. The time constants of the built-in Exp2Syn AMPA component and the AMPA/NMDA ratio can be adjusted by the user by setting the AMPA_tau1 , AMPA_tau2 and AMPA_NMDA_ratio parameter of the ModelLoader . 
The default AMPA/NMDA ratio is 2.0 from , and the default AMPA_tau1 and AMPA_tau2 are 0.1 ms and 2.0 ms, respectively . To test the Poirazi et al. 2003 model using its own receptor models, we also had to implement a modified version of the synapse functions of the ModelLoader that can deal with the different (pointer-based) implementation of synaptic activation in this model. For this purpose, a child class was implemented that inherits from the ModelLoader . This modified version is not part of the official HippoUnit version, because this older, more complicated implementation of synaptic models is not generally used anymore; however, this is a good example on how one can modify the capability methods of HippoUnit to match their own models or purposes. The code for this modified ModelLoader is available here: https://github.com/KaliLab/HippoUnit_demo/blob/master/ModelLoader_Poirazi_2003_CA1.py . The synaptic weights for each selected dendritic location are automatically adjusted by the test using a binary search algorithm so that the threshold for dendritic spike generation is 5 synchronous inputs–which was the average number of inputs that had to be activated by glutamate uncaging to evoke a dendritic spike in the experiments . This search runs in parallel for all selected dendritic locations. The search interval of the binary search and the initial step size of the searching range can be adjusted by the user through the c_minmax and c_step_start variables of the ModelLoader . During the iterations of the algorithm the step size may decrease if needed; a lower threshold for the step size ( c_step_stop variable of the ModelLoader ) must be set to avoid infinite looping. Those dendritic locations where this first dendritic spike generates a somatic action potential, or where no dendritic spike can be evoked, are excluded from further analysis. To let the user know, this information is displayed on the output and also printed into the log file saved by the test. Most of the features above are extracted at the threshold input level (5 inputs). The final score of this test is the average of the feature scores achieved by the model for the different features; however, a T-test analysis is also available as a separate score type for this test.
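The weight search can be sketched as a simple bisection, as below. Here dendritic_spike_threshold(weight) stands for activating an increasing number of synchronous inputs at the given location with the given synaptic weight and returning the number of inputs at which a dendritic spike first appears; the default search bounds are arbitrary example values, and the real implementation uses the c_minmax, c_step_start and c_step_stop parameters of the ModelLoader to control the search.

def find_synaptic_weight(dendritic_spike_threshold, target_inputs=5,
                         weight_min=0.0, weight_max=0.01, tolerance=1e-5):
    """Bisection for the synaptic weight at which `target_inputs` synchronous inputs
    (5 in the experiments) are needed to evoke a dendritic spike."""
    while weight_max - weight_min > tolerance:
        mid = 0.5 * (weight_min + weight_max)
        if dendritic_spike_threshold(mid) > target_inputs:
            weight_min = mid  # synapses too weak: more inputs than the target are needed
        else:
            weight_max = mid  # synapses strong enough (or too strong): lower the weight
    return 0.5 * (weight_min + weight_max)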
Most of the tests of HippoUnit require multiple simulations of the same model, either using stimuli of different intensities or at different locations in the cell. To run these simulations in parallel and save time, the Python multiprocessing.Pool module is used. The size of the pool can be set by the user. Moreover, all NEURON simulations are performed in multiprocessing pools to ensure that they run independently of each other, and to make it easy to erase the models after the process has finished. This is especially important in the case of HOC templates in order to avoid previously loaded templates running in the background and the occurrence of ‘Template cannot be redefined’ errors when the same model template is loaded again.
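A minimal sketch of this pattern is shown below; run_simulation is a placeholder for the functions that HippoUnit executes in worker processes (each of which loads the model and runs the NEURON simulation), and the pool size of 4 is an arbitrary example.

import functools
import multiprocessing

def run_simulation(amplitude, delay=100.0, duration=300.0):
    # Placeholder worker: a real worker loads the model and runs NEURON,
    # then returns the recorded voltage traces.
    return {"amplitude": amplitude, "delay": delay, "duration": duration}

if __name__ == "__main__":
    amplitudes = [0.05, 0.1, 0.15, 0.2, 0.25]
    with multiprocessing.Pool(processes=4) as pool:  # pool size is set by the user in HippoUnit
        results = pool.map(functools.partial(run_simulation, delay=100.0), amplitudes)
    print(results)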
Some of the validation tests of HippoUnit require lists of sections belonging to the different dendritic types of the apical tree (main apical trunk, apical tuft dendrites, and radial oblique dendrites). To classify the dendrites NeuroM is used as a base package. NeuroM contains a script that, starting from the tuft (uppermost dendritic branches in ) endpoints, iterates down the tree to find a single common ancestor. This is considered as the apical point. The apical point is the upper end of the main apical dendrite (trunk), from where the tuft region arises. Every dendrite branching from the trunk below this point is considered an oblique dendrite. However, there are many CA1 pyramidal cell morphologies where the trunk bifurcates close to the soma to form two or even more branches. In these cases the method described above finds this proximal bifurcation point as the apical point (see ). To overcome this issue, we worked out and implemented a method to find multiple apical points by iterating the function provided by NeuroM. In particular, if the initial apical point is closer to the soma than a pre-defined threshold, the function is run again on subtrees of the apical tree where the root node of the subtree is the previously found apical point, to find apical points on those subtrees (see ). When (possibly after multiple iterations) apical points that are far enough from the soma are found, NeuroM is used to iterate down from them on the parent sections, which will be the trunk sections (blue dots in ). Iterating up, the tuft sections are found (green dots in ), and the other descendants of the trunk sections are considered to be oblique dendrites (yellow dots in ). Once all the sections are classified, their NeuroM coordinates are converted to NEURON section information for further use. We note that this function can only be used for hoc models that load their morphologies from a separate morphology file (e.g., ASC, SWC) as NeuroM can only deal with morphologies provided in these standard formats. For models with NEURON morphologies implemented directly in the hoc language, the SectionLists required by a given test should be implemented within the model.
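The iterative apical-point search can be outlined as in the sketch below. The callables find_apical_point, distance_from_soma and subtrees_rooted_at are hypothetical stand-ins for the corresponding NeuroM-based operations, not actual NeuroM function names, and the real implementation additionally converts the classified sections back to NEURON section information.

def find_apical_points(subtree, min_distance, find_apical_point,
                       distance_from_soma, subtrees_rooted_at):
    """Return apical points that are at least `min_distance` away from the soma."""
    points = []
    candidate = find_apical_point(subtree)  # common ancestor of the tuft endpoints
    if candidate is None:
        return points
    if distance_from_soma(candidate) >= min_distance:
        points.append(candidate)
    else:
        # The trunk bifurcates close to the soma: repeat the search on each subtree
        # rooted at the too-proximal candidate point.
        for child in subtrees_rooted_at(candidate):
            points.extend(find_apical_points(child, min_distance, find_apical_point,
                                             distance_from_soma, subtrees_rooted_at))
    return points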
In this paper we demonstrate the utility of the HippoUnit validation test suite by applying its tests to validate and compare the behavior of several different detailed rat hippocampal CA1 pyramidal cell models available on ModelDB. For this initial comparison we chose models published by several modeling groups worldwide that were originally developed for various purposes. The models compared were the following: the Golding et al., 2001 model (ModelDB accession number: 64167), the Katz et al., 2009 model (ModelDB accession number: 127351), the Migliore et al., 2011 model (ModelDB accession number: 138205), the Poirazi et al., 2003 model (ModelDB accession number: 20212), the Bianchi et al., 2012 model (ModelDB accession number: 143719), and the Gómez González et al., 2011 model (ModelDB accession number: 144450). Models from the literature that are published on ModelDB typically implement their own simulations and plots to make it easier for users and readers to reproduce and visualize the results shown in the corresponding paper. Therefore, to be able to test the models described above using our test suite, we needed to create standalone versions of them. These standalone versions do not display any GUI, contain any built-in simulations, or apply run-time modifications, but otherwise their behavior should be identical to the published versions of the models. We also added section lists of the radial oblique and the trunk dendritic sections to those models where these were not already present, as some of the tests require such lists. To ensure that the standalone versions have the same properties as the original models, we ran the built-in simulations of the original models (including any run-time modifications they apply), checked the resulting parameters, and made sure that they match the parameters of the standalone versions. The modified models used for running validation tests are available in this GitHub repository: https://github.com/KaliLab/HippoUnit_demo .
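Based on the example notebooks in this repository, running a test on one of these standalone models looks roughly like the sketch below. The file paths are placeholders, and the constructor arguments and ModelLoader attribute names shown here are assumptions that may differ between HippoUnit versions; they should be checked against the repository before use.

import json
from hippounit import tests
from hippounit.utils import ModelLoader

model = ModelLoader(name="Katz_2009")                 # attribute names below are assumptions
model.hocpath = "standalone_models/Katz_2009/main.hoc"
model.soma = "soma[0]"
model.v_init = -70.0
model.celsius = 34.0

with open("target_features/somatic_features.json") as f:
    observation = json.load(f)
with open("config/somatic_stimuli.json") as f:
    config = json.load(f)

test = tests.SomaticFeaturesTest(observation=observation, config=config)
score = test.judge(model)   # SciUnit's judge method runs the whole protocol
print(score)                # final score: average Z-score of the evaluated features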
The HippoUnit validation suite HippoUnit ( https://github.com/KaliLab/hippounit ) is an open source test suite for the automatic and quantitative evaluation of the behavior of neural single cell models. The tests of HippoUnit automatically perform simulations that mimic common electrophysiological protocols on neuronal models to compare their behavior with quantitative experimental data using various feature-based error functions. Current validation tests cover somatic (subthreshold and spiking) behavior as well as signal propagation and integration in the dendrites. These tests were chosen because they collectively cover diverse functional aspects of cellular behavior that have been thoroughly investigated in experimental and modeling studies, and particularly because the necessary experimental data were available in sufficient quality and quantity. However, we note that the currently implemented tests, even in combination, probably do not fully constrain the behavior of the cell under all physiological conditions, and thus the test suite can be further improved by including additional tests and more experimental data. The tests were developed using data and models for rat hippocampal CA1 pyramidal cells. However, most of the tests are directly applicable to or can be adapted for other cell types if the necessary experimental data are available; examples of this will be presented in later sections. HippoUnit is implemented in the Python programming language, and is based on the SciUnit framework for testing scientific models. The current version of HippoUnit is capable of handling single cell models implemented in the NEURON simulator, provided that they do not apply any runtime modification, do not have a built-in graphical user interface, and do not automatically perform simulations. Meeting these conditions may require some modifications in the published code of the model. Once such a “standalone” version of the model is available, the tests of HippoUnit can be run by adapting and using the example Jupyter notebooks described in , without any further coding required from the user. In principle, neural models developed using other software tools can also be tested by HippoUnit; however, this requires the re-implementation by the user of the interface functions that allow HippoUnit to run the necessary simulations and record their output (see the Methods section for more details). In the current tests of HippoUnit, once all the necessary simulations have been performed and the responses of the model have been recorded, electrophysiological features are extracted from the voltage traces, and the discrepancy between the model’s behavior and the experiment is computed by comparing the feature values with those extracted from the experimental data (see Methods). Biological variability is taken into account by measuring the difference between the feature value for the model and the mean of the feature in the experiments in units of the standard deviation for that particular feature observed in the experiments. For simplicity, we refer to the result of this comparison as the feature score; however, we note that there are many possible sources of such discrepancy including, among others, experimental artefacts and noise, shortcomings of the models, and differences between the conditions assumed by the models and those in the actual experiments (see the Discussion for more details). 
The final score of a given test achieved by a given model is given by the average (or, in some cases, the sum) of the feature scores for all the features evaluated by the test. While the main output of the tests is the final score, which allows the quantitative comparison of the models’ behavior to experimental data, it is important to emphasize that it should never be blindly accepted. A high final score does not necessarily mean that the model is bad–it may also indicate an issue with the data, a mismatch between experimental conditions and modeling assumptions, or some problem with the implementation of the test itself (see the Discussion for further details). For this reason, and also to provide more insight into how the scores were obtained, the tests of HippoUnit typically provide a number of other useful outputs (see Methods), including figures that visualize the model’s behavior through traces and plot the feature and feature score values compared to the experimental data. It is always strongly recommended to look at the traces and other figures to get a fuller picture of the model’s response to the stimuli, which helps with the correct interpretation of validation results. Such closer inspection also makes it possible to detect possible test failures, when the extraction of certain features does not work correctly for a given model. HippoUnit can also take advantage of the parallel execution capabilities of modern computers. When tests require multiple simulations of the same model using different settings (e.g., different stimulation intensities or different stimulus locations in the cell), these simulations are run in parallel, which can make the validation process substantially faster, depending on the available computing resources. One convenient way of running a test on a model is to use an interactive computational notebook, such as the Jupyter Notebook , which enables the combination of program codes to be run (we used Python code to access the functionality of HippoUnit), the resulting outputs (e.g. figures, tables, text) and commentary or explanatory text in a single document. Therefore, we demonstrate the usage of HippoUnit through this method (See and https://github.com/KaliLab/HippoUnit_demo ). Comparison of the behavior of rat hippocampal CA1 pyramidal cell models selected from the literature We selected six different publications containing models of rat hippocampal CA1 pyramidal cells whose implementations for the NEURON simulator were available in the ModelDB database. Our aim was to compare the behavior of every model to the experimental target data using the tests of HippoUnit, which also allowed us to compare the models to each other, and to test their generalization performance in paradigms that they were not originally designed to capture. These models differ in their complexity regarding the number and types of ion channels that they contain, and they were built for different purposes. The Golding et al., 2001 model was developed to show the dichotomy of the back-propagation efficacy and the amplitudes of the back-propagating action potentials at distal trunk regions in CA1 pyramidal cells and to make predictions on the possible causes of this behavior. It contains only the most important ion channels (Na, K DR , K A ) needed to reproduce the generation and propagation of action potentials. The Katz et al., 2009 model is based on the Golding et al. 
2001 model and was built to investigate the functional consequences of the distribution of strength and density of synapses on the apical dendrites that they observed experimentally, for the mode of dendritic integration. The Migliore et al., 2011 model was used to study schizophrenic behavior. It is based on earlier models of the same modeling group, which were used to investigate the initiation and propagation of action potentials in oblique dendrites, and have been validated against different electrophysiological data. The Poirazi et al., 2003 model was designed to clarify the issues about the integrative properties of thin apical dendrites that may arise from the different and sometimes conflicting interpretations of available experimental data. This is a quite complex model in the sense that it contains a large number of different types of ion channels, whose properties were adjusted to fit in vitro experimental data, and it also contains four types of synaptic receptors. The Bianchi et al., 2012 model was designed to investigate the mechanisms behind depolarization block observed experimentally in the somatic spiking behavior of rat CA1 pyramidal cells. It was developed by combining and modifying the Shah et al., 2008 and the Poirazi et al. 2003 models . The former of these was developed to show the significance of axonal M-type potassium channels. The Gómez González et al., 2011 model is based on the Poirazi et al. 2003 model and it was modified to replicate the experimental data of on the nonlinear signal integration of radial oblique dendrites when the inputs arrive in a short time window. A common property of these models is that their parameters were set using manual procedures with the aim of reproducing the behavior of real rat CA1 PCs in one or a few specific paradigms. As some of them were built by modifying and further developing previous models, these share the same morphology (see ). On the other hand, the model of Gómez González et al. 2011 was adjusted to 5 different morphologies, which were all tested. In the case of the Golding et al. 2001 model, we tested three different versions (shown in Figs 8A, 8B and 9A of the corresponding paper ) that differ in the distribution of the sodium and the A-type potassium channels, and therefore in the back-propagation efficacy of the action potentials. The morphologies and characteristic voltage responses of all the models used in this comparison are displayed in . Running the tests of HippoUnit on these models, we took into account the original settings of the simulations of the models, and set the v_init (the initial voltage when the simulation starts) and the celsius (the temperature at which the simulation is done) variables accordingly. For the Bianchi et al. 2012 model we used variable time step integration during all the simulations, as was done in the original modeling study. For the other models a fixed time step was used (dt = 0.025 ms). Somatic Features Test Using the Somatic Features Test of HippoUnit, we compared the behavior of the models to features extracted from the patch clamp dataset, as each of the tested models was apparently constructed using experimental data obtained from patch clamp recordings as a reference. After reviewing the relevant literature, we concluded that the patch clamp dataset is in good agreement with experimental observations available in the literature (see in Methods), and it will be used as a representative example in this study.
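For orientation, the sketch below outlines what such a test run can look like, following the pattern of the example notebooks in the HippoUnit_demo repository. It is illustrative only: the model name and file paths are placeholders, the attribute and argument names should be checked against the demo notebooks, and additional settings (e.g., stimulus definitions) are omitted.

import json
from hippounit import tests
from hippounit.utils import ModelLoader

# Wrap a "standalone" NEURON model so that HippoUnit can control it.
model = ModelLoader()
model.name = 'Golding_2001'                      # placeholder model name
model.hocpath = 'models/Golding_2001/cell.hoc'   # placeholder path to the HOC file (attribute name assumed)
model.v_init = -70.0                             # initial voltage, following the original study
model.celsius = 35.0                             # simulation temperature

# Target feature statistics (means and standard deviations) extracted from the
# experimental recordings, stored as JSON.
with open('target_features/somatic_features.json') as f:  # placeholder path
    observation = json.load(f)

test = tests.SomaticFeaturesTest(observation=observation)  # arguments simplified
score = test.judge(model)  # runs the simulations, extracts the features, and returns the final score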
In the patch clamp recordings, both the depolarizing and the hyperpolarizing current injections were 300 ms long and 0.05, 0.1, 0.15, 0.2, and 0.25 nA in amplitude. Because during these recordings the cells were stimulated with relatively low amplitude current injections, some of the examined models (Migliore et al. 2011, Gómez González et al. 2011 n125 morphology) did not fire even for the highest amplitude tested. Some other models started to fire at higher current intensities than was observed experimentally. In these cases the features that describe action potential shape or timing properties cannot be evaluated for the given model (for the current amplitudes affected). Therefore, besides the final score achieved by the models on this test (the average Z-score for the successfully evaluated features; see Methods for details), which quantifies the discrepancy between the models’ behavior and the experimental observations regarding the successfully evaluated features, we also consider the proportion of the successfully evaluated features as an important measure of how closely the model matches this specific experimental dataset. This information, along with the names of the features that cannot be evaluated for the given model, is provided as an output of the test, and should be considered when drawing conclusions about the model’s performance. This is another example where looking at only the final score may not be enough to determine whether the model meets the requirements of the user, and shows how the other outputs of the tests can help the interpretation of the results. shows how the extracted feature values of the somatic response traces of the different models fit the experimental values. It is clear that the behavior of the different models is very diverse. Each model captures some of the experimental features but shows a larger discrepancy for others. The resting membrane potential ( voltage_base ) for all of the models was apparently adjusted to a more hyperpolarized value than in the experimental recordings we used for our comparison, and most of the models also return to a lower voltage value after the step stimuli ( steady_state_voltage ). An exception is the Poirazi et al. 2003 model, where the decay time constant after the stimulus is unusually high (this feature is not included in , but the slow decay can be seen in the example trace in , and detailed data are available here: https://github.com/KaliLab/HippoUnit_demo ). The voltage threshold for action potential generation ( AP_begin_voltage ) is lower than the experimental value for most of the models (that were able to generate action potentials in response to the examined current intensities), but it is higher than the experimental value for most versions of the Gómez González et al. 2011 model. For negative current steps most of the models get more hyperpolarized ( voltage_deflection ) (the most extreme being the Gómez González et al. 2011 model with the n129 morphology), while the Gómez González et al. 2011 model with the n125 morphology and the Migliore et al. 2011 model get less hyperpolarized than was observed experimentally. The sag amplitudes are also quite high for the Gómez González et al. 2011 n129 and n130 models, while the Katz et al. 2009 model and all versions of the Golding et al. 2001 model basically have no hyperpolarizing sag. It is quite conspicuous how much the amplitude of the action potentials ( APlast_amp , AP_amplitude , AP2_amp ) differs in the Gómez González et al.
2011 models from the experimental values and from the other models as well. The Katz et al. 2009 model and one of the versions (“ ”) of the Golding et al. 2001 model have slightly too high action potential amplitudes, and these models have relatively small action potential width ( AP_width ). On the other hand, the rising phase ( AP_rise_time , AP_rise_rate ) of the Katz et al. 2009 model appears to be too slow. Looking at the inverse interspike interval ( ISI ) values, it can be seen that the experimental spike trains show adaptation in the ISIs, meaning that the first ISI is smaller (the inverse ISI is higher) than the last ISI for the same current injection amplitude. This behavior can be observed in the Katz et al. 2009 model and in three versions (n128, n129, and n130 morphologies) of the Gómez González et al. 2011 model, but cannot really be seen in the Bianchi et al. 2012, the Poirazi et al. 2003, or the three versions of the Golding et al. 2001 models. At first glance it may seem contradictory that, in the case of the Gómez González et al. 2011 model version with the n129 morphology, the spike counts are quite low, while the mean frequency and the inverse ISI values are high. This is because the soma of this model does not fire over the whole period of the stimulation, but starts firing at higher frequencies, then stops firing for the rest of the stimulus (see ). The Katz et al. 2009 model fires quite a high number of action potentials ( Spikecount ) compared to the experimental data, at a high frequency. In the experimental recordings there is a delay before the first action potential is generated, which becomes shorter with increasing current intensity (indicated by the inv_time_to_first_spike feature, which becomes larger with increasing input intensity). In most of the models this behavior can be observed, albeit to different degrees. The Katz et al. 2009 model has the shortest delays (highest inv_time_to_first_spike values), but the effect is still visible. To quantify the difference between the experimental dataset and the simulated output of the models, these were compared using the feature-based error function (Z-score) described above to calculate the feature score. shows the mean scores of the model features whose absolute values are illustrated in (averaged over the different current step amplitudes examined), while indicates the number of successfully evaluated features out of the number of features that were attempted to be evaluated.
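The role of this second measure can be illustrated with a small, purely schematic sketch (not HippoUnit code): features that could not be evaluated are simply excluded from the average, so the final score alone does not reveal how many features actually entered the comparison.

def summarize(feature_scores):
    # feature_scores: dict mapping feature name -> Z-score, with None marking
    # features that could not be evaluated (e.g., no spikes at the tested amplitudes).
    evaluated = [v for v in feature_scores.values() if v is not None]
    final_score = sum(evaluated) / len(evaluated) if evaluated else float('nan')
    coverage = len(evaluated) / len(feature_scores)
    return final_score, coverage

# Hypothetical example: two features evaluated, two not.
print(summarize({'AP_amplitude': 1.5, 'AP_width': None, 'voltage_base': 0.5, 'mean_frequency': None}))
# -> (1.0, 0.5)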
As mentioned above, the proportion of the successfully evaluated features is also an important measure of how well the behavior of the models fits the specific experimental observations, and should be taken into account. Depolarization Block Test In the Depolarization Block Test three features are evaluated. Two of them examine the threshold current intensity to reach depolarization block. The I_maxNumAP feature is the current intensity at which the model fires the maximum number of action potentials, and the I_below_depol_block feature is the current intensity one step before the model enters depolarization block. Both are compared to the experimental I th feature because, in the experiment , the number of spikes increased monotonically with increasing current intensity up to the current amplitude where the cell entered depolarization block during the stimulus, which led to a drop in the number of action potentials. By contrast, we found that some models started to fire fewer spikes at higher current intensities while still firing over the whole period of the current step stimulus, i.e., without entering depolarization block. Therefore, we introduced the two separate features for the threshold current. If these two feature values are not equal, a penalty is added to the score. The third evaluated feature is V eq , the equilibrium potential during the depolarization block, which is calculated as the average of the membrane potential over the last 100 ms of a current pulse with amplitude 50 pA above I_maxNumAP (or 50 pA above I_below_depol_block if its value is not equal to I_maxNumAP ). Each model has a value for the I_maxNumAP feature, while those models that do not enter depolarization block are not supposed to have a value for the I_below_depol_block feature and the V eq feature. The results from applying the Depolarization Block Test to the models from ModelDB are shown in . According to the test, four of the models entered depolarization block. However, by looking at the actual voltage traces provided by the test, it becomes apparent that only the Bianchi et al. 2012 model behaves correctly (which was developed to show this behavior). The other three models actually managed to “cheat” the test. In the case of the Katz et al. 2009 and the Golding et al. 2001 “ ” models, the APs get smaller and smaller with increasing stimulus amplitude until they become so small that they do not reach the threshold for action potential detection; therefore, these APs are not counted by the test and a V eq value is also calculated. The Gómez González et al. 2011 model adjusted to the n129 morphology does not fire during the whole period of the current stimulation for a wide range of current amplitudes (see ). With increasing intensity of the current injection it fires an increasing number of spikes, but always stops firing before the end of the stimulus. On the other hand, there is a certain current intensity above which the model starts to fire fewer action potentials, and which is thus detected as I_maxNumAP by the test. Because no action potentials can be detected during the last 100 ms of the somatic response one step above the detected “threshold” current intensity, the model is declared to have entered depolarization block, and a V eq value is also extracted. In principle, it would be desirable to modify the test so that it correctly rejects the three models above.
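Before turning to why such cases are difficult to exclude automatically, the logic of the features described above can be summarized in a short, simplified sketch (illustrative code only, not the actual HippoUnit implementation; it omits I_below_depol_block and the penalty term):

def depol_block_features(amplitudes, spike_counts, traces, dt):
    # amplitudes: stimulus amplitudes (nA) in increasing order, 50 pA apart
    # spike_counts[i]: number of detected APs in response to amplitudes[i]
    # traces[i]: somatic voltage trace (mV, sampled every dt ms) for amplitudes[i]
    i = max(range(len(amplitudes)), key=lambda k: spike_counts[k])
    i_max_num_ap = amplitudes[i]          # amplitude at which the most APs are fired
    n = int(100.0 / dt)                   # number of samples in the last 100 ms
    v_eq = sum(traces[i + 1][-n:]) / n    # mean voltage one step (50 pA) above I_maxNumAP
    return i_max_num_ap, v_eq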
However, the models described above show behavior so similar to depolarization block that it is hard to distinguish the two using automated methods. Furthermore, we have made substantial efforts to make the test more general and applicable to a wide variety of models with different behavior, and we are concerned that defining and adding further criteria to the test to deal with these specific cases would be an ad hoc solution, and would possibly cause further ‘cheats’ when applied to other models with unexpected behavior. These cases underline the importance of critically evaluating the full output (especially the figures of the recorded voltage traces) of the tests rather than blindly accepting the final scores provided. Back-propagating AP Test This test first finds all the dendritic segments that belong to the main apical dendrite of the model and which are 50, 150, 250, and 350 ± 20 μm from the soma. Then a train of action potentials of frequency around 15 Hz is triggered in the soma by injecting a step current of appropriate amplitude (as determined by the test), and the amplitudes of the first and last action potentials in the train are measured at the selected locations. In the Bianchi et al. 2012 and the Poirazi et al. 2003 models (which share the same morphology, see ) no suitable trunk locations could be found in the most proximal (50 ± 20 μm) and most distal (350 ± 20 μm) regions. This is because this morphology has quite long dendritic sections that are divided into a small number of segments. In particular, the first trunk section (apical_dendrite[0]) originates from the soma, is 102.66 μm long, and has only two segments. The center of one of them is 25.67 μm from the soma, while the other is already 77 μm away from the soma. Neither of these segments belongs to the 50 ± 20 μm range, and therefore they are not selected by the test. The n123 morphology of the Gómez González et al. 2011 model has the same shape , but in this case the segments are different, and therefore it does not share the same problem. At the remaining, successfully evaluated distance ranges in the apical trunk of the Bianchi et al. 2012 model, action potentials propagate very actively, barely attenuating. For the AP1_amp and APlast_amp features at these distances, this model has the highest feature score , while the Poirazi et al. 2003 model performs quite well. The Golding et al. 2001 model was designed to investigate how the distribution of ion channels can affect the back-propagation efficacy in the trunk. The two versions of the Golding et al. 2001 model (the “ ” and “ ” versions), which are supposed to be weakly propagating according to the corresponding paper , are also weakly propagating according to the test. However, the difference between their strongly and weakly propagating feature scores is not too large , which is probably caused by the much smaller standard deviation value of the experimental data for the weakly propagating case. Although the amplitudes of the first action potentials of these two models fit the experimental data relatively well, they start to decline slightly closer to the soma than was observed experimentally, as the amplitudes are already very small at 250 ± 20 μm . (In , the data corresponding to these two versions of the model are almost completely overlapping for more distal regions.)
The amplitudes for the last action potential fit the data well, except in the most proximal regions (see the relatively high feature score in or the detailed results here: https://github.com/KaliLab/HippoUnit_demo ). For all versions of the Golding et al. 2001 model, AP amplitudes are too high at the most proximal distance range. As for the strongly propagating version of the Golding et al. 2001 model (“ ” version), the amplitude of the first action potential is too high at the proximal locations, but further it fits the data well. The amplitude of the last action potential remains too high even at more distal locations. It is worth noting that, in the corresponding paper , they only examined a single action potential triggered by a 5 ms long input in their simulations, and did not examine or compare to their data the properties of the last action potential in a longer spike train. Finally, we note that in all versions of the Golding et al. 2001 model a spike train with frequency around 23 Hz was evoked and examined as it turned out to be difficult to set the frequency closer to 15 Hz. The different versions of the Gómez González et al. 2011 model behave qualitatively similarly in this test, although there were smaller quantitative differences. In almost all versions the amplitudes of the first action potential in the dendrites are slightly too low at the most proximal locations but fit the experimental data better at further locations. The exceptions are the versions with the n128 and n129 morphologies, which have lower first action potential amplitudes at the furthest locations, but not low enough to be considered as weak propagating. The amplitudes for the last action potential are too high at the distal regions but fit better at the proximal ones. The only exception is the one with morphology n129, where the last action potential attenuates more at further locations and fits the data better. In the case of the Katz et al. 2009 model, a spike train with frequency around 40 Hz was examined, as the firing frequency increases so suddenly with increasing current intensity in this model that no frequency closer to 15 Hz could be adjusted. In this model the last action potential propagates too strongly, while the dendritic amplitudes for the first action potential are close to the experimental values. In the Migliore et al. 2011 model the amplitudes for the last action potential are too high, while the amplitude of the first back-propagating action potential is too low at locations in the 250 ± 20 μm and 350 ± 20 μm distance ranges. Finally, all the models that we examined were found to be strongly propagating by the test, with the exception of those versions of the Golding et al. 2001 model that were explicitly developed to be weakly propagating. PSP Attenuation Test In this test the extent of the attenuation of the amplitude of an excitatory post-synaptic potential (EPSP) is examined as it propagates towards the soma from different input locations in the apical trunk. The Katz et al. 2009, the Bianchi et al. 2012, and all versions of the Golding et al. 2001 models perform quite well in this test . The various versions of the Golding et al. 2001 model are almost identical in this respect, which is not surprising as they differ only in the distribution of the sodium and A-type potassium channels. This shows that, as we would expect, these properties do not have much effect on the propagation of relatively low-amplitude signals such as unitary PSPs. 
Interestingly, the different versions of the Gómez González et al. 2011 model, with different morphologies, behave quite differently, which shows that this behavior can depend very much on the morphology of the dendritic tree. Oblique Integration Test This test probes the integration properties of the radial oblique dendrites of rat CA1 pyramidal cell models. The test is based on the experimental results described in . In this study, the somatic voltage response was recorded while synaptic inputs in single oblique dendrites were activated in different spatio-temporal combinations using glutamate uncaging. The main finding was that a sufficiently high number of synchronously activated and spatially clustered inputs produced a supralinear response consisting of a fast (Na) and a slow (NMDA) component, while asynchronously activated inputs summed linearly or sublinearly. This test selects all the radial oblique dendrites of the model that meet the experimental criteria: they are terminal dendrites (they have no child sections) and are at most 120 μm from the soma. Then the selected dendrites are stimulated in a proximal and in a distal region (separately) using an increasing number of clustered, synchronous or asynchronous synaptic inputs to obtain the voltage responses of the model and extract the features of dendritic integration. The synaptic inputs are not unitary inputs, i.e., their strength is not equivalent to the strength of one synapse in the real cell; instead, the strength is adjusted so that 5 synchronous inputs are needed to trigger a dendritic action potential. The intensity of the laser used for glutamate uncaging was set in a similar way in the experiments . Most of the features were extracted at this just-suprathreshold level of input. We noticed that in some cases the strength of the synapse is not set correctly by the test; for example, it may happen that an actual dendritic spike does not reach the spike detection threshold in amplitude, or sometimes the EPSP may reach the threshold for spike detection without actual spike generation. The user has the ability to set the threshold used by eFEL for spike detection, but sometimes a single threshold may not work even for the different oblique dendrites (and proximal and distal locations in the same dendrites) of a single model. For consistency, we used the same spike detection threshold of -20 mV for all the models. The synaptic stimulus contains an AMPA and an NMDA receptor-mediated component. As the default synapse, HippoUnit uses the Exp2Syn double exponential synapse built into NEURON for the AMPA component, and its own built-in NMDA receptor model, whose parameters were set according to experimental data from the literature (see the Methods section for more details). In those models that originally do not have any synaptic component (the Bianchi et al. 2012 model and all versions of the Golding et al. 2001 model) this default synapse was used. Both the Katz et al. 2009 and the Migliore et al. 2011 models used the Exp2Syn in their simulations, so in their case the time constants of this function were set to the values used in the original publications. As these models did not contain NMDA receptors, the default NMDA receptor model and the default AMPA/NMDA ratio of HippoUnit were used. The Gómez González et al. 2011 and the Poirazi et al. 2003 models have their own AMPA and NMDA receptor models and their own AMPA/NMDA ratio values, which were used when testing these models.
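To make the structure of this composite stimulus more tangible, the sketch below shows how a double-exponential (Exp2Syn) synapse can be attached and activated through NEURON's Python interface. It covers only the AMPA-like component with placeholder parameter values; HippoUnit's own NMDA receptor model and the automatic adjustment of the synaptic weight are not reproduced here.

from neuron import h
h.load_file('stdrun.hoc')

dend = h.Section(name='oblique_dend')         # stand-in for a selected oblique dendrite
syn = h.Exp2Syn(dend(0.5))                    # double-exponential conductance (AMPA-like component)
syn.tau1, syn.tau2, syn.e = 0.1, 2.0, 0.0     # placeholder rise/decay time constants (ms) and reversal potential (mV)

stim = h.NetStim()                            # artificial presynaptic spike source
stim.number, stim.start = 1, 100.0            # a single activation at t = 100 ms

nc = h.NetCon(stim, syn)
nc.weight[0] = 0.001                          # placeholder peak conductance (uS)

v = h.Vector().record(dend(0.5)._ref_v)       # record the local membrane potential
h.finitialize(-70.0)
h.continuerun(300.0)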
As shown by the averaged “measured EPSP vs expected EPSP” curves in , all three versions of the Golding et al. 2001 model have a jump in the amplitude of the somatic response at the threshold input level, which is the result of the generation of dendritic spikes. However, even these larger average responses do not reach the supralinear region, as it would be expected according to the experimental observations . The reason for this discrepancy is that a dendritic spike was generated in the simulations in only a subset of the stimulated dendrites; in the rest of the dendrites tested, the amplitude of the EPSPs went above the spike detection threshold during the adjustment of the synaptic weight without actually triggering a dendritic spike, which led to the corresponding synaptic strength being incorrectly set for that particular dendrite. Averaging over the results for locations with and without dendritic spikes led to an overall sublinear integration profile. The Migliore et al. 2011 model performs quite well on this test. In this case, seven dendrites could be tested out of the ten dendrites within the correct distance range because, in the others, the dendritic spike at the threshold input level also elicited a somatic action potential, and therefore these dendrites were excluded from further testing. In the Katz et al. 2009 model all the selected dendritic locations could be tested, and in most of them the synaptic strength could be adjusted appropriately. For a few dendrites, some input levels higher than the threshold for dendritic spike generation also triggered somatic action potentials. This effect causes the high supralinearity in the “measured EPSP vs expected EPSP” curve in , but has no effect on the extracted features. In the Bianchi et al. 2012 model only one dendrite could be selected, in which very high amplitude dendritic spikes were evoked by the synaptic inputs, making the signal integration highly supralinear. In the Poirazi et al. 2003 model also only one dendrite could be selected based on its distance from the soma; furthermore, only the distal location could be tested even in this dendrite, as at the proximal location the dendritic action potential at the threshold input level generated a somatic action potential. However, at the distal location, the synaptic strength could not be set correctly. For the synaptic strength chosen by the test, the actual threshold input level where a dendritic spike is first generated is at 4 inputs, but this dendritic AP is too small in amplitude to be detected, and the response to 5 inputs is recognized as the first dendritic spike instead. Therefore, the features that should be extracted at the threshold input level are instead extracted from the voltage response to 5 inputs. In this model this results in a reduced supralinearity value, as this feature is calculated one input level higher than the actual threshold. In addition, for even higher input levels dendritic bursts can be observed, which causes large supralinearity values in the “measured EPSP vs expected EPSP” curve in , but this does not affect the feature values. Models from Gómez González et al. 2011 were expected to be particularly relevant for this test, as these models were tuned to fit the same data set on which this test is based. However, we encountered an important issue when comparing our test results for these models to the results shown in the paper . 
In particular, the paper clearly indicates which dendrites were examined, and it is stated that those are at maximum 150 μm from the soma. However, when we measured the distance of these locations from the soma by following the path along the dendrites (as it is done by the test of HippoUnit), we often found it to be larger than 150 μm. We note that when the distance was measured in 3D coordinates rather than along the dendrites, all the dendrites used by Gómez González et al. 2011 appeared to be within 150 μm of the soma, so we assume that this definition was used in the paper. As we consider the path distance to be more meaningful than Euclidean distance in this context, and this was also the criterion used in the experimental study, we consistently use path distance in HippoUnit to find the relevant dendritic segments. Nevertheless, this difference in the selection of dendrites should be kept in mind when the results of this validation for models of Gómez González et al. 2011 are evaluated. In two versions of the Gómez González et al. 2011 model (those that were adjusted to the n123 and n125 morphologies) only one oblique dendrite matched the experimental criteria and could therefore be selected, and these are not among those that were studied by the developers of the model. In each of these cases the dendritic spike at the proximal location at the input threshold level triggered a somatic action potential, and therefore only the distal location could be tested. In the case of the n125 morphology, the dendritic spikes that appear first for just-suprathreshold input are so small in amplitude that they do not reach the spike detection threshold (-20 mV), and are thus not detected. Therefore, the automatically adjusted synaptic weight is larger than the appropriate value would be, which results in larger somatic EPSPs than expected (see ). With this synaptic weight, the first dendritic spike and therefore the jump to the supralinear region in the “measured EPSP vs expected EPSP” curve is for 4 synaptic inputs instead of 5. This is also the case in one of the two selected dendrites of the version of this model with the n128 morphology. Similarly to the Poirazi et al. 2003 model, this results in a lower degree of nonlinearity at threshold feature value, than it would be if the feature were extracted at the actual threshold input level (4 inputs) instead of the one which the test attempted to adjust (5 inputs). The suprathreshold nonlinearity feature has a high value because at that input level (6 inputs), somatic action potentials are triggered. In the version of the Gómez González et al. 2011 model that uses the n129 morphology, 10 oblique dendrites could be selected for testing (none of them is among those that its developers used) but only 4 could be tested because, for the rest, the dendritic spike at the threshold input level already elicits a somatic action potential. The synaptic weights required to set the threshold input level to 5 are not found correctly in most cases; the actual threshold input level is at 4 or 3. Suprathreshold nonlinearity is high, because at that input level (6 inputs) somatic action potentials are triggered for some of the examined dendritic locations. The version of the Gómez González et al. 2011 model that uses the n130 morphology achieves the best (lowest) final score on this test. In this model many oblique dendrites could be selected and tested, including two (179, 189) that the developers used in their simulations . 
In most cases the synaptic weights are found correctly, setting the threshold input level to 5 synapses. For some dendrites there are somatic action potentials at higher input levels, but that does not affect the features. The value of the time to peak feature for each model is much smaller than the experimental value . This is because in each of the models the maximum amplitude of the somatic EPSP is determined by the fast component, caused by the appearance of the dendritic sodium spikes, while in the experimental observations it is shaped mainly by the slow NMDA component following the sodium spike. Overall characterization and model comparison based on all tests of HippoUnit In summary, using HippoUnit, we compared the behavior of several rat hippocampal CA1 pyramidal cell models available on ModelDB in several distinct domains, and found that all of these models match experimental results well in some domains (typically those that they were originally built to capture) but fit the experimental observations less precisely in others. summarizes the final scores achieved by the different models on the various tests (lower scores indicate a better match in all cases). Perhaps a bit surprisingly, the different versions of the Golding et al. 2001 model showed a good match to the experimental data in all of the tests (except for the Depolarization Block Test), even though these are the simplest ones among the models in the sense that they contain the smallest number of different types of ion channels. On the other hand, these models do not perform outstandingly well on the Back-propagating Action Potential Test, although they were developed to study the mechanisms behind (the dichotomy of) action potential back-propagation, which is evaluated by this test based on the data that were published together with these models . The most probable reason for this surprising observation is that, in the original study , only a few features of the model’s response were compared with the experimental results. HippoUnit tested the behavior of the model based on a larger set of experimental features from the original study, and was therefore able to uncover differences between the model’s response and the experimental data on features for which the model was not evaluated in the source publication. The Bianchi et al. 2012 model is the only one that can produce real depolarization block within the range of input strengths examined by the corresponding test. The success of this model in this test is not surprising because this is the only model that was tuned to reproduce this behavior; on the other hand, the failure of the other models in this respect clearly shows that proper depolarization block requires some combination of mechanisms that are at least partially distinct from those that allow good performance in the other tests. The Bianchi et al. 2012 model achieves a relatively high final score only on the Back-propagating Action Potential Test, as action potentials seem to propagate too actively in its dendrites, leading to high AP amplitudes even in more distal compartments. The Gómez González et al. 2011 models were developed to capture the same experimental observations on dendritic integration that are tested by the Oblique Integration Test of HippoUnit, but, somewhat surprisingly, some of these versions achieved quite high feature scores on this test, while others performed quite well.
This is partly caused by the fact that HippoUnit often selects different dendritic sections for testing from those that were studied by the developers of these models (see above for details). The output of HippoUnit shows that the different oblique dendrites of these models can show quite diverse behavior, and beyond those studied in the corresponding paper , other oblique dendrites do not necessarily match the experimental observations. Some of its versions also perform relatively poorly on the PSP-Attenuation Test, similar to the Migliore et al. 2011 and the Poirazi et al. 2003 models. The Katz et al. 2009 model is not outstandingly good in any of the tests, but still achieves relatively good final scores everywhere (although its apparent good performance on the Depolarization Block Test is misleading—see detailed explanation above). The model files that were used to test the models described above, the detailed validation results (all the output files of HippoUnit), and the Jupyter Notebooks that show how to run the tests of HippoUnit on these models are available in the following Github repository: https://github.com/KaliLab/HippoUnit_demo . Application of HippoUnit to models built using automated parameter optimization within the human brain project Besides enabling a detailed comparison of published models, HippoUnit can also be used to monitor the performance of new models at various stages of model development. Here, we illustrate this by showing how we have used HippoUnit within the HBP to systematically validate detailed multi-compartmental models of hippocampal neurons developed using multi-objective parameter optimization methods implemented by the open source Blue Brain Python Optimization Library (BluePyOpt ). To this end, we extended HippoUnit to allow it to handle the output of optimization performed by BluePyOpt (see Methods). Models of rat CA1 pyramidal cells were optimized using target feature data extracted from sharp electrode recordings . Then, using the Somatic Features Test of HippoUnit, we compared the behavior of the models to features extracted from this sharp electrode dataset. However, while only a subset of the features extracted by eFEL was used in the optimization (mostly those that describe the rate and timing of the spikes; e.g., the different inter-spike interval (ISI), time to last/first spike, mean frequency features), we considered all the eFEL features that could be successfully extracted from the data during validation. In addition, sharp electrode measurements were also available for several types of interneurons in the rat hippocampal CA1 region, and models of these interneurons were also constructed using similar automated methods . Using the appropriate observation file and the stimulus file belonging to it, the Somatic Features Test of HippoUnit can also be applied to these models to evaluate their somatic spiking features. The other tests of HippoUnit are currently not applicable to interneurons, mostly due to the lack of appropriate target data. We applied the tests of HippoUnit to the version of the models published in , and to a later version (v4) described in Ecker et al. (2020), which was intended to further improve the dendritic behavior of the models, as this is critical for their proper functioning in the network. The two sets of models were created using the same morphology files and similar optimization methods and protocols. 
These new optimizations differed mainly in the allowed range for the density of the sodium channels in the dendrites. For the pyramidal cell models a new feature was also introduced in the parameter optimization that constrains the amplitudes of back-propagating action potentials in the main apical dendrite. The new interneuron models also had an exponentially decreasing (rather than constant) density of Na channels, and A-type K channels with more hyperpolarized activation in their dendrites. For more details on the models, see the original publications . After running all the tests of HippoUnit on both sets of models generated by BluePyOpt, we performed a comparison of the old and the new versions of the models by carrying out a statistical analysis of the final scores achieved by the models of the same cell type on the different tests. In , the median, the interquartile range, and the full range of the final scores achieved by the two versions of the model set are compared. According to the results of the Wilcoxon signed-rank test, the new version of the models achieved significantly better scores on the Back-propagating Action Potential Test (p = 0.0046), on the Oblique Integration Test (p = 0.0033), and on the PSP Attenuation Test (p = 0.0107), which is the result of reduced dendritic excitability. Moreover, in most of the other cases the behavior of the models improved slightly (but not significantly) with the new version. Only in the case of the Somatic Features Test applied to bAC interneurons did the new models perform slightly worse (but still quite well), and this difference was not significant (p = 0.75). These results show the importance of model validation performed against experimental findings, especially those not considered when building the model, in every iteration during the process of model development. This approach can greatly facilitate the construction of models that perform well in a variety of contexts, help avoid model regression, and guide the model building process towards a more robust and general implementation. Integration of HippoUnit into the Validation Framework and the Brain Simulation Platform of the Human Brain Project The HBP is developing scientific infrastructure to facilitate advances in neuroscience, medicine, and computing . One component of this research infrastructure is the Brain Simulation Platform (BSP) ( https://bsp.humanbrainproject.eu ), an online collaborative platform that supports the construction and simulation of neural models at various scales. As we argued above, systematic, automated validation of models is a critical prerequisite of collaborative model development. Accordingly, the BSP includes a software framework for quantitative model validation and testing that explicitly supports applying a given validation test to different models and storing the results . The framework consists of a web service and a set of test suites, which are Python modules based on the SciUnit package. As we discussed earlier, SciUnit uses the concept of capabilities, which are standardized interfaces between the models to be tested and the validation tests. By defining the capabilities to which models must adhere, individual validation tests can be implemented independently of any specific model and used to validate any compatible model despite differences in their internal structures, the language and/or the simulator used.
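A minimal skeleton of this pattern is sketched below for illustration; the class and method names are placeholders rather than actual HippoUnit definitions, and details such as score types and observation checking are omitted.

import sciunit

class ProducesSomaticVoltageTrace(sciunit.Capability):
    # Standardized interface: a model claiming this capability must be able to
    # run a somatic current-step simulation and return the recorded voltage trace.
    def run_current_step(self, amplitude, delay, duration):
        raise NotImplementedError()

class SomaticStepTest(sciunit.Test):
    # The test declares the capabilities it needs, so it can be applied to any
    # model implementing them, regardless of the underlying simulator.
    required_capabilities = (ProducesSomaticVoltageTrace,)

    def generate_prediction(self, model):
        # Run the protocol on the model and extract feature values (details omitted).
        return {}

    def compute_score(self, observation, prediction):
        # Compare predicted and observed feature values (details omitted).
        pass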
Each test must include a specification of the required model capabilities, the location of the reference (experimental) dataset, and data analysis code to transform the recorded variables (e.g., membrane potential) into feature values that allow the simulation results to be directly and quantitatively compared to the experimental data through statistical analysis. The web services framework supports the management of models, tests, and validation results. It is accessible via web apps within the HBP Collaboratory, and also through a Python client. The framework makes it possible to permanently record, examine and reproduce validation results, and enables tracking the evolution of models over time, as well as comparison against other models in the domain. Every test of HippoUnit described in this paper has been individually registered in the Validation Framework. The JSON files containing the target experimental data for each test are stored (besides the HippoUnit_demo GitHub repository) in storage containers at the Swiss National Supercomputing Centre (CSCS), where they are publicly available. The location of the corresponding data file is associated with each registered test, so that the data are loaded automatically when the test is run on a model via the Validation Framework. As the Somatic Features Test of HippoUnit was used to compare models against five different data sets (data from sharp electrode measurements in pyramidal cells and interneurons belonging to three different electronic types, and data obtained from patch clamp recordings in pyramidal cells), these are considered to be and have been registered as five separate tests in the Validation Framework. All the models that were tested and compared in this study (including the CA1 pyramidal cell models from the literature and the BluePyOpt optimized CA1 pyramidal cells and interneurons of the HBP) have been registered and are available in the Model Catalog of the Validation Framework with their locations in the CSCS storage linked to them. In addition to the modifications that were needed to make the models compatible with testing with HippoUnit (described in the section “Methods–Models from literature”), the versions of the models uploaded to the CSCS container also contain an __init__.py file. This file implements a python class that inherits all the functions of the ModelLoader class of HippoUnit without modification. Its role is to make the validation of these models via the Framework more straightforward by defining and setting the parameters of the ModelLoader class (such as the path to the HOC and NMODL files, the name of the section lists, etc.) that otherwise need to be set after instantiating the ModelLoader (see the HippoUnit_demo GitHub repository: https://github.com/KaliLab/HippoUnit_demo/tree/master/jupyter_notebooks ). The validation results discussed in this paper have also been registered in the Validation Framework, with all their related files (output figures and JSON files) linked to them. These can be accessed using the Model Validation app of the framework. The Brain Simulation Platform of the HBP contains several online ‘Use Cases’, which are available on the platform and help the users to try and use the various established pipelines. The Use Case called ‘Hippocampus Single Cell Model Validation’ can be used to apply the tests of HippoUnit to models that were built using automated parameter optimization within the HBP. 
The Brain Simulation Platform also hosts interactive “Live Paper” documents that refer to published papers related to the models or software tools on the Platform. Live Papers provide links that make it possible to visualize or download results and data discussed in the respective paper, and even to run the associated simulations on the Platform. We have created a Live Paper ( https://humanbrainproject.github.io/hbp-bsp-live-papers/2020/saray_et_al_2020/saray_et_al_2020.html ) showing the results of the study presented in this paper in more detail. This interactive document provides links to all the output figures and data files resulting from the validation of the models from literature discussed here. This provides a more detailed insight into their behavior individually. Moreover, as part of this Live Paper a HippoUnit Use Case is also available in the form of a Jupyter Notebook, which guides the user through running the validation tests of HippoUnit on the models from literature that are already registered in the Framework, and makes it possible to reproduce the results presented here.
HippoUnit ( https://github.com/KaliLab/hippounit ) is an open source test suite for the automatic and quantitative evaluation of the behavior of neural single cell models. The tests of HippoUnit automatically perform simulations that mimic common electrophysiological protocols on neuronal models to compare their behavior with quantitative experimental data using various feature-based error functions. Current validation tests cover somatic (subthreshold and spiking) behavior as well as signal propagation and integration in the dendrites. These tests were chosen because they collectively cover diverse functional aspects of cellular behavior that have been thoroughly investigated in experimental and modeling studies, and particularly because the necessary experimental data were available in sufficient quality and quantity. However, we note that the currently implemented tests, even in combination, probably do not fully constrain the behavior of the cell under all physiological conditions, and thus the test suite can be further improved by including additional tests and more experimental data. The tests were developed using data and models for rat hippocampal CA1 pyramidal cells. However, most of the tests are directly applicable to or can be adapted for other cell types if the necessary experimental data are available; examples of this will be presented in later sections. HippoUnit is implemented in the Python programming language, and is based on the SciUnit framework for testing scientific models. The current version of HippoUnit is capable of handling single cell models implemented in the NEURON simulator, provided that they do not apply any runtime modification, do not have a built-in graphical user interface, and do not automatically perform simulations. Meeting these conditions may require some modifications in the published code of the model. Once such a “standalone” version of the model is available, the tests of HippoUnit can be run by adapting and using the example Jupyter notebooks described in , without any further coding required from the user. In principle, neural models developed using other software tools can also be tested by HippoUnit; however, this requires the re-implementation by the user of the interface functions that allow HippoUnit to run the necessary simulations and record their output (see the Methods section for more details). In the current tests of HippoUnit, once all the necessary simulations have been performed and the responses of the model have been recorded, electrophysiological features are extracted from the voltage traces, and the discrepancy between the model’s behavior and the experiment is computed by comparing the feature values with those extracted from the experimental data (see Methods). Biological variability is taken into account by measuring the difference between the feature value for the model and the mean of the feature in the experiments in units of the standard deviation for that particular feature observed in the experiments. For simplicity, we refer to the result of this comparison as the feature score; however, we note that there are many possible sources of such discrepancy including, among others, experimental artefacts and noise, shortcomings of the models, and differences between the conditions assumed by the models and those in the actual experiments (see the Discussion for more details). 
The final score of a given test achieved by a given model is given by the average (or, in some cases, the sum) of the feature scores for all the features evaluated by the test. While the main output of the tests is the final score, which allows the quantitative comparison of the models’ behavior to experimental data, it is important to emphasize that it should never be blindly accepted. A high final score does not necessarily mean that the model is bad–it may also indicate an issue with the data, a mismatch between experimental conditions and modeling assumptions, or some problem with the implementation of the test itself (see the Discussion for further details). For this reason, and also to provide more insight into how the scores were obtained, the tests of HippoUnit typically provide a number of other useful outputs (see Methods), including figures that visualize the model’s behavior through traces and plot the feature and feature score values compared to the experimental data. It is always strongly recommended to look at the traces and other figures to get a fuller picture of the model’s response to the stimuli, which helps with the correct interpretation of validation results. Such closer inspection also makes it possible to detect possible test failures, when the extraction of certain features does not work correctly for a given model. HippoUnit can also take advantage of the parallel execution capabilities of modern computers. When tests require multiple simulations of the same model using different settings (e.g., different stimulation intensities or different stimulus locations in the cell), these simulations are run in parallel, which can make the validation process substantially faster, depending on the available computing resources. One convenient way of running a test on a model is to use an interactive computational notebook, such as the Jupyter Notebook , which enables the combination of program codes to be run (we used Python code to access the functionality of HippoUnit), the resulting outputs (e.g. figures, tables, text) and commentary or explanatory text in a single document. Therefore, we demonstrate the usage of HippoUnit through this method (See and https://github.com/KaliLab/HippoUnit_demo ).
We selected six different publications containing models of rat hippocampal CA1 pyramidal cells whose implementations for the NEURON simulator were available in the ModelDB database. Our aim was to compare the behavior of every model to the experimental target data using the tests of HippoUnit, which also allowed us to compare the models to each other, and to test their generalization performance in paradigms that they were not originally designed to capture. These models differ in their complexity regarding the number and types of ion channels that they contain, and they were built for different purposes. The Golding et al., 2001 model was developed to show the dichotomy of the back-propagation efficacy and the amplitudes of the back-propagating action potentials at distal trunk regions in CA1 pyramidal cells and to make predictions on the possible causes of this behavior. It contains only the most important ion channels (Na, K DR , K A ) needed to reproduce the generation and propagation of action potentials. The Katz et al., 2009 model is based on the Golding et al. 2001 model and was built to investigate the functional consequences of the distribution of strength and density of synapses on the apical dendrites that they observed experimentally, for the mode of dendritic integration. The Migliore et al., 2011 model was used to study schizophrenic behavior. It is based on earlier models of the same modeling group, which were used to investigate the initiation and propagation of action potentials in oblique dendrites, and have been validated against different electrophysiological data. The Poirazi et al., 2003 model was designed to clarify the issues about the integrative properties of thin apical dendrites that may arise from the different and sometimes conflicting interpretations of available experimental data. This is a quite complex model in the sense that it contains a large number of different types of ion channels, whose properties were adjusted to fit in vitro experimental data, and it also contains four types of synaptic receptors. The Bianchi et al., 2012 model was designed to investigate the mechanisms behind depolarization block observed experimentally in the somatic spiking behavior of rat CA1 pyramidal cells. It was developed by combining and modifying the Shah et al., 2008 and the Poirazi et al. 2003 models . The former of these was developed to show the significance of axonal M-type potassium channels. The Gómez González et al., 2011 model is based on the Poirazi et al. 2003 model and it was modified to replicate the experimental data of on the nonlinear signal integration of radial oblique dendrites when the inputs arrive in a short time window. A common property of these models is that their parameters were set using manual procedures with the aim of reproducing the behavior of real rat CA1 PCs in one or a few specific paradigms. As some of them were built by modifying and further developing previous models, these share the same morphology (see ). On the other hand, the model of Gómez González et al. 2011 was adjusted to 5 different morphologies, which were all tested. In the case of the Golding et al. 2001 model, we tested three different versions (shown in Figs 8A, 8B and 9A of the corresponding paper ) that differ in the distribution of the sodium and the A-type potassium channels, and therefore the back-propagation efficacy of the action potentials. The morphologies and characteristic voltage responses of all the models used in this comparison are displayed in . 
Running the tests of HippoUnit on these models we took into account the original settings of the simulations of the models, and set the v_init (the initial voltage when the simulation starts), and the celsius (the temperature at which the simulation is done) variables accordingly. For the Bianchi et al 2012 model we used variable time step integration during all the simulations, as it was done in the original modeling study. For the other models a fixed time step were used (dt = 0.025 ms). Somatic Features Test Using the Somatic Features Test of HippoUnit, we compared the behavior of the models to features extracted from the patch clamp dataset, as each of the tested models was apparently constructed using experimental data obtained from patch clamp recordings as a reference. After performing a review of the relevant literature, we concluded that the patch clamp dataset is in good agreement with experimental observations available in the literature (see in Methods), and will be used as a representative example in this study. In the patch clamp recordings, both the depolarizing and the hyperpolarizing current injections were 300 ms long and 0.05, 0.1, 0.15, 0.2, 0.25 nA in amplitude. Because during these recordings the cells were stimulated with relatively low amplitude current injections, some of the examined models (Migliore et al. 2011, Gómez González et al. 2011 n125 morphology) did not fire even for the highest amplitude tested. Some other models started to fire for higher current intensities than it was observed experimentally. In these cases the features that describe action potential shape or timing properties cannot be evaluated for the given model (for the current amplitudes affected). Therefore, besides the final score achieved by the models on this test (the average Z-score for the successfully evaluated features–see Methods for details) that shows the discrepancy of the models’ behavior and the experimental observations regarding the successfully evaluated features, we also consider the proportion of the successfully evaluated features as an important measure of how closely the model matches this specific experimental dataset. This information, along with the names of the features that cannot be evaluated for the given model, are provided as outputs of the test, and should be considered when making conclusions on the model’s performance. This is another example where looking at only the final score may not be enough to determine whether the model meets the requirements of the user, and shows how the other outputs of the tests can help the interpretation of the results. shows how the extracted feature values of the somatic response traces of the different models fit the experimental values. It is clear that the behavior of the different models is very diverse. Each model captures some of the experimental features but shows a larger discrepancy for others. The resting membrane potential ( voltage_base ) for all of the models was apparently adjusted to a more hyperpolarized value than in the experimental recordings we used for our comparison, and most of the models also return to a lower voltage value after the step stimuli ( steady_state_voltage ). An exception is the Poirazi et al. 2003 model, where the decay time constant after the stimulus is unusually high (this feature is not included in , but the slow decay can be seen in the example trace in , and detailed data are available here: https://github.com/KaliLab/HippoUnit_demo ). 
The voltage threshold for action potential generation ( AP_begin_voltage ) is lower than the experimental value for most of the models (that were able to generate action potentials in response to the examined current intensities), but it is higher than the experimental value for most versions of the Gómez González et al. 2011 model. For negative current steps most of the models get more hyperpolarized ( voltage_deflection ) (the most extreme is the Gómez González et al. 2011 model with the n129 morphology), while the Gómez González et al. 2011 model with the n125 morphology and the Migliore et al. 2011 model get less hyperpolarized than was observed experimentally. The sag amplitudes are also quite high for the Gómez González et al. 2011 n129 and n130 models, while the Katz et al. 2009 model and all versions of the Golding et al. 2001 model basically have no hyperpolarizing sag.

It is quite conspicuous how much the amplitude of the action potentials ( APlast_amp , AP_amplitude , AP2_amp ) differs in the Gómez González et al. 2011 models from the experimental values and from the other models as well. The Katz et al. 2009 and one of the versions (“ ”) of the Golding et al. 2001 model have slightly too high action potential amplitudes, and these models have relatively small action potential width ( AP_width ). On the other hand, the rising phase ( AP_rise_time , AP_rise_rate ) of the Katz et al. 2009 model appears to be too slow.

Looking at the inverse interspike interval ( ISI ) values, it can be seen that the experimental spike trains show adaptation in the ISIs, meaning that the first ISI is smaller (the inverse ISI is higher) than the last ISI for the same current injection amplitude. This behavior can be observed in the case of the Katz et al. 2009 model and three versions (n128, n129, n130 morphologies) of the Gómez González et al. 2011 model, but cannot really be seen in the Bianchi et al. 2012, the Poirazi et al. 2003, and the three versions of the Golding et al. 2001 models. At first look it may seem contradictory that in the case of the Gómez González et al. 2011 model version with the n129 morphology the spike counts are quite low, while the mean frequency and the inverse ISI values are high. This is because the soma of this model does not fire over the whole period of the stimulation, but starts firing at higher frequencies, then stops firing for the rest of the stimulus (see ). The Katz et al. 2009 model fires quite a high number of action potentials ( Spikecount ) compared to the experimental data, at a high frequency. In the experimental recordings there is a delay before the first action potential is generated, which becomes shorter with increasing current intensity (indicated by the inv_time_to_first_spike feature that becomes larger with increasing input intensity). In most of the models this behavior can be observed, albeit to different degrees. The Katz et al. 2009 model has the shortest delays (highest inv_time_to_first_spike values), but the effect is still visible.

To quantify the difference between the experimental dataset and the simulated output of the models, these were compared using the feature-based error function (Z-Score) described above to calculate the feature score.  shows the mean scores of the model features whose absolute values are illustrated in (averaged over the different current step amplitudes examined), while indicates the number of successfully evaluated features out of the number of features that were attempted to be evaluated.
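In essence, the feature score is an absolute Z-score: the deviation of the model's feature value from the experimental mean, measured in units of the experimental standard deviation, and averaged over the successfully evaluated features. The sketch below illustrates this calculation; it is a simplified illustration rather than HippoUnit's actual implementation, which also handles multiple current amplitudes and features that cannot be evaluated.

    import numpy as np

    def feature_z_score(model_value, exp_mean, exp_std):
        # Absolute Z-score of one model feature against the experimental distribution
        return abs(model_value - exp_mean) / exp_std

    def somatic_features_final_score(feature_results):
        # feature_results: list of (model_value, exp_mean, exp_std) tuples, one per
        # successfully evaluated feature; features that could not be evaluated are omitted
        scores = [feature_z_score(m, mu, sd) for (m, mu, sd) in feature_results]
        return np.mean(scores) if scores else float("nan")

    # Illustrative usage with made-up numbers (e.g., a voltage feature in mV and a rate in Hz):
    # final_score = somatic_features_final_score([(-72.0, -65.4, 2.1), (8.0, 6.5, 1.2)])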
From it is even more clearly visible that each model fits some experimental features well but does not capture others. For example, it is quite noticeable in that most of the versions of the Gómez González et al. 2011 model (greenish dots) perform well for features describing action potential timing (upper part of the figure, e.g., ISIs , mean_frequency , spikecount ), but get higher feature scores for features of action potential shape (lower part of the figure, e.g., AP_rise_rate , AP_rise_time , AP_fall_rate , AP_fall_time , AP amplitudes ). Conversely, the Katz et al. 2009 model achieved better scores for AP shape features than for features describing AP timing. It is also worth noting that none of the feature scores for the model of Migliore et al. 2011 was higher than 4; however, looking at it can be seen that less than half of the experimental features were successfully evaluated in this model, which is because it does not fire action potentials for the current injection amplitudes examined here. As mentioned above, the proportion of the successfully evaluated features is also an important measure of how well the behavior of the models fits the specific experimental observations, and should be taken into account.

Depolarization Block Test

In the Depolarization Block Test three features are evaluated. Two of them examine the threshold current intensity to reach depolarization block. The I_maxNumAP feature is the current intensity at which the model fires the maximum number of action potentials, and the I_below_depol_block feature is the current intensity one step before the model enters depolarization block. Both are compared to the experimental I_th feature because, in the experiment , the number of spikes increased monotonically with increasing current intensity up to the current amplitude where the cell entered depolarization block during the stimulus, which led to a drop in the number of action potentials. By contrast, we found that some models started to fire fewer spikes for higher current intensities while still firing over the whole period of the current step stimulus, i.e., without entering depolarization block. Therefore, we introduced the two separate features for the threshold current. If these two feature values are not equal, a penalty is added to the score. The third evaluated feature is V_eq , the equilibrium potential during the depolarization block, which is calculated as the average of the membrane potential over the last 100 ms of a current pulse with amplitude 50 pA above I_maxNumAP (or 50 pA above I_below_depol_block if its value is not equal to I_maxNumAP ). Each model has a value for the I_maxNumAP feature, while those models that do not enter depolarization block are not supposed to have a value for the I_below_depol_block feature and the V_eq feature.

The results from applying the Depolarization Block Test to the models from ModelDB are shown in . According to the test, four of the models entered depolarization block. However, by looking at the actual voltage traces provided by the test, it becomes apparent that only the Bianchi et al. 2012 model behaves correctly (which was developed to show this behavior). The other three models actually managed to “cheat” the test. In the case of the Katz et al. 2009 and the Golding et al.
2001 “ ” models, the APs get smaller and smaller with increasing stimulus amplitude until they get so small that they do not reach the threshold for action potential detection; therefore, these APs are not counted by the test and V_eq is also calculated. The Gómez González et al. 2011 model adjusted to the n129 morphology does not fire during the whole period of the current stimulation for a wide range of current amplitudes (see ). As the intensity of the current injection is increased, it fires an increasing number of spikes, but always stops firing after a while, before the end of the stimulus. On the other hand, there is a certain current intensity after which the model starts to fire fewer action potentials, and which is thus detected as I_maxNumAP by the test. Because no action potentials can be detected during the last 100 ms of the somatic response one step above the detected “threshold” current intensity, the model is declared to have entered depolarization block, and a V_eq value is also extracted.

In principle, it would be desirable to modify the test so that it correctly rejects the three models above. However, the models described above show behavior so similar to depolarization block that it is hard to distinguish from true depolarization block using automatic methods. Furthermore, we have made substantial efforts to make the test more general and applicable to a wide variety of models with different behavior, and we are concerned that defining and adding further criteria to the test to deal with these specific cases would be an ad hoc solution, and would possibly cause further ‘cheats’ when applied to other models with unexpected behavior. These cases underline the importance of critically evaluating the full output (especially the figures of the recorded voltage traces) of the tests rather than blindly accepting the final scores provided.

Back-propagating AP Test

This test first finds all the dendritic segments that belong to the main apical dendrite of the model and which are 50, 150, 250, and 350 ± 20 μm from the soma. Then a train of action potentials of frequency around 15 Hz is triggered in the soma by injecting a step current of appropriate amplitude (as determined by the test), and the amplitudes of the first and last action potentials in the train are measured at the selected locations. In the Bianchi et al. 2012 and the Poirazi et al. 2003 models (which share the same morphology, see ) no suitable trunk locations could be found in the most proximal (50 ± 20 μm) and most distal (350 ± 20 μm) regions. This is because this morphology has quite long dendritic sections that are divided into a small number of segments. In particular, the first trunk section (apical_dendrite[0]) originates from the soma, is 102.66 μm long, and has only two segments. The center of one of them is 25.67 μm from the soma, while the other is already 77 μm away from the soma. None of these segments belongs to the 50 ± 20 μm range, and therefore they are not selected by the test. The n123 morphology of the Gómez González et al. 2011 model has the same shape , but in this case the segments are different, and therefore it does not share the same problem.

At the remaining, successfully evaluated distance ranges in the apical trunk of the Bianchi et al. 2012 model, action potentials propagate very actively, barely attenuating. For the AP1_amp and APlast_amp features at these distances, this model has the highest feature score , while the Poirazi et al. 2003 model performs quite well. The Golding et al.
2001 model was designed to investigate how the distribution of ion channels can affect the back-propagation efficacy in the trunk. The two versions of the Golding et al. 2001 model (“ ” and “ ” versions), which are supposed to be weakly propagating according to the corresponding paper , are also weakly propagating according to the test. However, the difference between their strongly and weakly propagating feature scores is not too large , which is probably caused by the much smaller standard deviation value of the experimental data for the weakly propagating case. Although the amplitudes of the first action potentials of these two models fit the experimental data relatively well, they start to decline slightly closer to the soma than was observed experimentally, as the amplitudes are already very small at 250 ± 20 μm . (In , the data corresponding to these two versions of the model are almost completely overlapping for more distal regions.) The amplitudes for the last action potential fit the data well, except in the most proximal regions (see the relatively high feature score in or the detailed results here: https://github.com/KaliLab/HippoUnit_demo ). For all versions of the Golding et al. 2001 model, AP amplitudes are too high at the most proximal distance range. As for the strongly propagating version of the Golding et al. 2001 model (“ ” version), the amplitude of the first action potential is too high at the proximal locations, but it fits the data well at greater distances. The amplitude of the last action potential remains too high even at more distal locations. It is worth noting that, in the corresponding paper , the authors examined only a single action potential triggered by a 5 ms long input in their simulations, and did not examine, or compare to their data, the properties of the last action potential in a longer spike train. Finally, we note that in all versions of the Golding et al. 2001 model a spike train with a frequency around 23 Hz was evoked and examined, as it turned out to be difficult to set the frequency closer to 15 Hz.

The different versions of the Gómez González et al. 2011 model behave qualitatively similarly in this test, although there were smaller quantitative differences. In almost all versions the amplitudes of the first action potential in the dendrites are slightly too low at the most proximal locations but fit the experimental data better at further locations. The exceptions are the versions with the n128 and n129 morphologies, which have lower first action potential amplitudes at the furthest locations, but not low enough to be considered weakly propagating. The amplitudes for the last action potential are too high at the distal regions but fit better at the proximal ones. The only exception is the one with morphology n129, where the last action potential attenuates more at further locations and fits the data better. In the case of the Katz et al. 2009 model, a spike train with a frequency around 40 Hz was examined, as the firing frequency increases so suddenly with increasing current intensity in this model that no frequency closer to 15 Hz could be set. In this model the last action potential propagates too strongly, while the dendritic amplitudes for the first action potential are close to the experimental values. In the Migliore et al. 2011 model the amplitudes for the last action potential are too high, while the amplitude of the first back-propagating action potential is too low at locations in the 250 ± 20 μm and 350 ± 20 μm distance ranges. Finally, all the models that we examined were found to be strongly propagating by the test, with the exception of those versions of the Golding et al. 2001 model that were explicitly developed to be weakly propagating.
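The segment selection step underlying this test can be made concrete using NEURON's path-distance function. The sketch below is illustrative only: it assumes the sections forming the apical trunk have already been identified, and it uses the two-argument form of h.distance available in recent NEURON versions. It is not HippoUnit's actual implementation, but it shows why a morphology whose long sections are split into only a few segments, as discussed above, can leave some distance ranges without any selectable segment.

    from neuron import h

    def trunk_segments_by_distance(soma, apical_trunk_sections,
                                   targets=(50, 150, 250, 350), tolerance=20):
        # Group the segments of the apical trunk by their path distance from the soma.
        # apical_trunk_sections: list of NEURON Section objects forming the main apical
        # dendrite (assumed to be identified beforehand).
        origin = soma(0.5)
        selected = {d: [] for d in targets}
        for sec in apical_trunk_sections:
            for seg in sec:
                dist = h.distance(origin, seg)   # path distance along the dendritic tree
                for d in targets:
                    if abs(dist - d) <= tolerance:
                        selected[d].append(seg)
        return selected

    # With a 102.66 um first trunk section containing only two segments (centers at
    # 25.67 um and 77 um), neither segment falls into the 50 +/- 20 um range, so that
    # distance range stays empty, as described above for the Bianchi/Poirazi morphology.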
PSP Attenuation Test

In this test the extent of the attenuation of the amplitude of an excitatory post-synaptic potential (EPSP) is examined as it propagates towards the soma from different input locations in the apical trunk. The Katz et al. 2009, the Bianchi et al. 2012, and all versions of the Golding et al. 2001 models perform quite well in this test . The various versions of the Golding et al. 2001 model are almost identical in this respect, which is not surprising as they differ only in the distribution of the sodium and A-type potassium channels. This shows that, as we would expect, these properties do not have much effect on the propagation of relatively low-amplitude signals such as unitary PSPs. Interestingly, the different versions of the Gómez González et al. 2011 model, with different morphologies, behave quite differently, which shows that this behavior can depend very much on the morphology of the dendritic tree.

Oblique Integration Test

This test probes the integration properties of the radial oblique dendrites of rat CA1 pyramidal cell models. The test is based on the experimental results described in . In this study, the somatic voltage response was recorded while synaptic inputs in single oblique dendrites were activated in different spatio-temporal combinations using glutamate uncaging. The main finding was that a sufficiently high number of synchronously activated and spatially clustered inputs produced a supralinear response consisting of a fast (Na) and a slow (NMDA) component, while asynchronously activated inputs summed linearly or sublinearly. This test selects all the radial oblique dendrites of the model that meet the experimental criteria: they are terminal dendrites (they have no child sections) and are at most 120 μm from the soma. Then the selected dendrites are stimulated in a proximal and in a distal region (separately) using an increasing number of clustered, synchronous or asynchronous synaptic inputs to get the voltage responses of the model, and extract the features of dendritic integration. The synaptic inputs are not unitary inputs, i.e., their strength is not equivalent to the strength of one synapse in the real cell; instead, the strength is adjusted in a way that 5 synchronous inputs are needed to trigger a dendritic action potential. The intensity of the laser used for glutamate uncaging was set in a similar way in the experiments . Most of the features were extracted at this just-suprathreshold level of input. We noticed that in some cases the strength of the synapse is not set correctly by the test; for example, it may happen that an actual dendritic spike does not reach the spike detection threshold in amplitude, or sometimes the EPSP may reach the threshold for spike detection without actual spike generation. The user has the ability to set the threshold used by eFEL for spike detection, but sometimes a single threshold may not work even for the different oblique dendrites (and proximal and distal locations in the same dendrites) of a single model. For consistency, we used the same spike detection threshold of -20 mV for all the models.
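The weight-adjustment procedure just described can be sketched as a simple search over synaptic weights. This is an illustration of the general idea rather than HippoUnit's actual code, and the two helper functions (run_synchronous_inputs, detect_spike) are hypothetical placeholders standing in for the simulation and the threshold-crossing, eFEL-based spike detection. Because the detection step only asks whether the dendritic voltage crosses the detection threshold, a large spikeless EPSP can be mistaken for a dendritic spike (and a small genuine spike can be missed), which is exactly the failure mode discussed below.

    def adjust_synaptic_weight(run_synchronous_inputs, detect_spike,
                               target_inputs=5, detection_threshold=-20.0,
                               w_start=1e-4, w_factor=1.2, w_max=5e-2):
        # Increase the synaptic weight until `target_inputs` synchronous inputs first
        # evoke a detectable dendritic spike, while one fewer input does not.
        # run_synchronous_inputs(n_inputs, weight) -> dendritic voltage trace (mV values)
        # detect_spike(trace, threshold) -> True if the trace crosses `threshold`
        weight = w_start
        while weight < w_max:
            at_target = detect_spike(run_synchronous_inputs(target_inputs, weight),
                                     detection_threshold)
            below_target = detect_spike(run_synchronous_inputs(target_inputs - 1, weight),
                                        detection_threshold)
            if at_target and not below_target:
                return weight       # threshold input level is exactly `target_inputs`
            weight *= w_factor
        raise RuntimeError("no suitable synaptic weight found")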
The synaptic stimulus contains an AMPA and an NMDA receptor-mediated component. As the default synapse, HippoUnit uses the Exp2Syn double exponential synapse built into NEURON for the AMPA component, and its own built-in NMDA receptor model, whose parameters were set according to experimental data from the literature (see the Methods section for more details). In those models that originally do not have any synaptic component (the Bianchi et al. 2012 model and all versions of the Golding et al. 2001 model) this default synapse was used. Both the Katz et al. 2009 and the Migliore et al. 2011 models used the Exp2Syn in their simulations, so in their case the time constants of this function were set to the values used in the original publications. As these models did not contain NMDA receptors, the default NMDA receptor model and the default AMPA/NMDA ratio of HippoUnit were used. The Gómez González et al. 2011 and the Poirazi et al. 2003 models have their own AMPA and NMDA receptor models and their own AMPA/NMDA ratio values, which were used when testing them.

As shown by the averaged “measured EPSP vs expected EPSP” curves in , all three versions of the Golding et al. 2001 model have a jump in the amplitude of the somatic response at the threshold input level, which is the result of the generation of dendritic spikes. However, even these larger average responses do not reach the supralinear region, as would be expected according to the experimental observations . The reason for this discrepancy is that a dendritic spike was generated in the simulations in only a subset of the stimulated dendrites; in the rest of the dendrites tested, the amplitude of the EPSPs went above the spike detection threshold during the adjustment of the synaptic weight without actually triggering a dendritic spike, which led to the corresponding synaptic strength being incorrectly set for that particular dendrite. Averaging over the results for locations with and without dendritic spikes led to an overall sublinear integration profile.

The Migliore et al. 2011 model performs quite well on this test. In this case, seven dendrites could be tested out of the ten dendrites within the correct distance range because, in the others, the dendritic spike at the threshold input level also elicited a somatic action potential, and therefore these dendrites were excluded from further testing. In the Katz et al. 2009 model all the selected dendritic locations could be tested, and in most of them the synaptic strength could be adjusted appropriately. For a few dendrites, some input levels higher than the threshold for dendritic spike generation also triggered somatic action potentials. This effect causes the high supralinearity in the “measured EPSP vs expected EPSP” curve in , but has no effect on the extracted features. In the Bianchi et al. 2012 model only one dendrite could be selected, in which very high amplitude dendritic spikes were evoked by the synaptic inputs, making the signal integration highly supralinear. In the Poirazi et al. 2003 model, too, only one dendrite could be selected based on its distance from the soma; furthermore, only the distal location could be tested even in this dendrite, as at the proximal location the dendritic action potential at the threshold input level generated a somatic action potential. However, at the distal location, the synaptic strength could not be set correctly.
For the synaptic strength chosen by the test, the actual threshold input level where a dendritic spike is first generated is at 4 inputs, but this dendritic AP is too small in amplitude to be detected, and the response to 5 inputs is recognized as the first dendritic spike instead. Therefore, the features that should be extracted at the threshold input level are instead extracted from the voltage response to 5 inputs. In this model this results in a reduced supralinearity value, as this feature is calculated one input level higher than the actual threshold. In addition, for even higher input levels dendritic bursts can be observed, which causes large supralinearity values in the “measured EPSP vs expected EPSP” curve in , but this does not affect the feature values.

Models from Gómez González et al. 2011 were expected to be particularly relevant for this test, as these models were tuned to fit the same data set on which this test is based. However, we encountered an important issue when comparing our test results for these models to the results shown in the paper . In particular, the paper clearly indicates which dendrites were examined, and it is stated that those are at most 150 μm from the soma. However, when we measured the distance of these locations from the soma by following the path along the dendrites (as is done by the test of HippoUnit), we often found it to be larger than 150 μm. We note that when the distance was measured in 3D coordinates rather than along the dendrites, all the dendrites used by Gómez González et al. 2011 appeared to be within 150 μm of the soma, so we assume that this definition was used in the paper. As we consider the path distance to be more meaningful than the Euclidean distance in this context, and this was also the criterion used in the experimental study, we consistently use path distance in HippoUnit to find the relevant dendritic segments. Nevertheless, this difference in the selection of dendrites should be kept in mind when the results of this validation for the models of Gómez González et al. 2011 are evaluated.

In two versions of the Gómez González et al. 2011 model (those that were adjusted to the n123 and n125 morphologies) only one oblique dendrite matched the experimental criteria and could therefore be selected, and these are not among those that were studied by the developers of the model. In each of these cases the dendritic spike at the proximal location at the input threshold level triggered a somatic action potential, and therefore only the distal location could be tested. In the case of the n125 morphology, the dendritic spikes that appear first for just-suprathreshold input are so small in amplitude that they do not reach the spike detection threshold (-20 mV), and are thus not detected. Therefore, the automatically adjusted synaptic weight is larger than the appropriate value would be, which results in larger somatic EPSPs than expected (see ). With this synaptic weight, the first dendritic spike, and therefore the jump to the supralinear region in the “measured EPSP vs expected EPSP” curve, occurs at 4 synaptic inputs instead of 5. This is also the case in one of the two selected dendrites of the version of this model with the n128 morphology. As with the Poirazi et al. 2003 model, this results in a lower value of the degree of nonlinearity at threshold feature than would be obtained if the feature were extracted at the actual threshold input level (4 inputs) instead of the one that the test attempted to set (5 inputs).
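To make the “measured EPSP vs expected EPSP” terminology concrete: the expected response for a given number of inputs is the linear sum of the individually measured unitary responses, and the degree of nonlinearity relates the actually measured amplitude to this expectation. The sketch below assumes the convention of expressing this ratio as a percentage (100% corresponding to linear summation), following the experimental study; HippoUnit's exact definition and bookkeeping may differ.

    def expected_epsp(unitary_amplitudes, n_inputs):
        # Expected somatic EPSP amplitude (mV) if the first n_inputs summed linearly
        return sum(unitary_amplitudes[:n_inputs])

    def degree_of_nonlinearity(measured_amplitude, expected_amplitude):
        # Measured/expected somatic EPSP amplitude as a percentage:
        # 100% = linear summation, >100% = supralinear, <100% = sublinear
        return 100.0 * measured_amplitude / expected_amplitude

    # Illustrative (made-up) numbers: a dendritic spike at the threshold input level
    # pushes the measured response above the linear expectation.
    unitary = [0.8, 0.9, 0.85, 0.8, 0.9]                       # mV, one input at a time
    nonlinearity_at_threshold = degree_of_nonlinearity(6.5, expected_epsp(unitary, 5))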
The suprathreshold nonlinearity feature has a high value because at that input level (6 inputs), somatic action potentials are triggered. In the version of the Gómez González et al. 2011 model that uses the n129 morphology, 10 oblique dendrites could be selected for testing (none of them is among those that its developers used) but only 4 could be tested because, for the rest, the dendritic spike at the threshold input level already elicits a somatic action potential. The synaptic weights required to set the threshold input level to 5 are not found correctly in most cases; the actual threshold input level is at 4 or 3. Suprathreshold nonlinearity is high, because at that input level (6 inputs) somatic action potentials are triggered for some of the examined dendritic locations. The version of the Gómez González et al. 2011 model that uses the n130 morphology achieves the best (lowest) final score on this test. In this model many oblique dendrites could be selected and tested, including two (179, 189) that the developers used in their simulations . In most cases the synaptic weights are found correctly, setting the threshold input level to 5 synapses. For some dendrites there are somatic action potentials at higher input levels, but that does not affect the features. The value of the time to peak feature for each model is much smaller than the experimental value . This is because in each of the models the maximum amplitude of the somatic EPSP is determined by the fast component, caused by the appearance of the dendritic sodium spikes, while in the experimental observations it is mainly shaped by the slow NMDA component following the sodium spike.

Overall characterization and model comparison based on all tests of HippoUnit

In summary, using HippoUnit, we compared the behavior of several rat hippocampal CA1 pyramidal cell models available on ModelDB in several distinct domains, and found that all of these models match experimental results well in some domains (typically those that they were originally built to capture) but fit the experimental observations less precisely in others.  summarizes the final scores achieved by the different models on the various tests (lower scores indicate a better match in all cases). Perhaps a bit surprisingly, the different versions of the Golding et al. 2001 model showed a good match to the experimental data in all of the tests (except for the Depolarization Block Test), even though these are the simplest ones among the models in the sense that they contain the smallest number of different types of ion channels. On the other hand, these models do not perform outstandingly well on the Back-propagating Action Potential Test, although they were developed to study the mechanisms behind (the dichotomy of) action potential back-propagation, which is evaluated by this test based on the data that were published together with these models . The most probable reason for this surprising observation is that, in the original study , only a few features of the model’s response were compared with the experimental results. HippoUnit tested the behavior of the model based on a larger set of experimental features from the original study, and was therefore able to uncover differences between the model’s response and the experimental data on features for which the model was not evaluated in the source publication. The Bianchi et al. 2012 model is the only one that can produce real depolarization block within the range of input strengths examined by the corresponding test.
The success of this model in this test is not surprising because this is the only model that was tuned to reproduce this behavior; on the other hand, the failure of the other models in this respect clearly shows that proper depolarization block requires some combination of mechanisms that are at least partially distinct from those that allow good performance in the other tests. The Bianchi et al. 2012 model achieves a relatively high final score only on the Back-propagating Action Potential Test, as action potentials seem to propagate too actively in its dendrites, leading to high AP amplitudes even in more distal compartments.

The Gómez González et al. 2011 models were developed to capture the same experimental observations on dendritic integration that are tested by the Oblique Integration Test of HippoUnit, but, somewhat surprisingly, some of their versions achieved quite high feature scores on this test, while others performed quite well. This is partly caused by the fact that HippoUnit often selects different dendritic sections for testing from those that were studied by the developers of these models (see above for details). The output of HippoUnit shows that the different oblique dendrites of these models can exhibit quite diverse behavior, and beyond those studied in the corresponding paper , other oblique dendrites do not necessarily match the experimental observations. Some of these versions also perform relatively poorly on the PSP-Attenuation Test, similar to the Migliore et al. 2011 and the Poirazi et al. 2003 models. The Katz et al. 2009 model is not outstandingly good in any of the tests, but still achieves relatively good final scores everywhere (although its apparent good performance on the Depolarization Block Test is misleading—see detailed explanation above).

The model files that were used to test the models described above, the detailed validation results (all the output files of HippoUnit), and the Jupyter Notebooks that show how to run the tests of HippoUnit on these models are available in the following GitHub repository: https://github.com/KaliLab/HippoUnit_demo .
Using the Somatic Features Test of HippoUnit, we compared the behavior of the models to features extracted from the patch clamp dataset, as each of the tested models was apparently constructed using experimental data obtained from patch clamp recordings as a reference. After performing a review of the relevant literature, we concluded that the patch clamp dataset is in good agreement with experimental observations available in the literature (see in Methods), and will be used as a representative example in this study. In the patch clamp recordings, both the depolarizing and the hyperpolarizing current injections were 300 ms long and 0.05, 0.1, 0.15, 0.2, 0.25 nA in amplitude. Because during these recordings the cells were stimulated with relatively low amplitude current injections, some of the examined models (Migliore et al. 2011, Gómez González et al. 2011 n125 morphology) did not fire even for the highest amplitude tested. Some other models started to fire for higher current intensities than it was observed experimentally. In these cases the features that describe action potential shape or timing properties cannot be evaluated for the given model (for the current amplitudes affected). Therefore, besides the final score achieved by the models on this test (the average Z-score for the successfully evaluated features–see Methods for details) that shows the discrepancy of the models’ behavior and the experimental observations regarding the successfully evaluated features, we also consider the proportion of the successfully evaluated features as an important measure of how closely the model matches this specific experimental dataset. This information, along with the names of the features that cannot be evaluated for the given model, are provided as outputs of the test, and should be considered when making conclusions on the model’s performance. This is another example where looking at only the final score may not be enough to determine whether the model meets the requirements of the user, and shows how the other outputs of the tests can help the interpretation of the results. shows how the extracted feature values of the somatic response traces of the different models fit the experimental values. It is clear that the behavior of the different models is very diverse. Each model captures some of the experimental features but shows a larger discrepancy for others. The resting membrane potential ( voltage_base ) for all of the models was apparently adjusted to a more hyperpolarized value than in the experimental recordings we used for our comparison, and most of the models also return to a lower voltage value after the step stimuli ( steady_state_voltage ). An exception is the Poirazi et al. 2003 model, where the decay time constant after the stimulus is unusually high (this feature is not included in , but the slow decay can be seen in the example trace in , and detailed data are available here: https://github.com/KaliLab/HippoUnit_demo ). The voltage threshold for action potential generation ( AP_begin_voltage ) is lower than the experimental value for most of the models (that were able to generate action potentials in response to the examined current intensities), but it is higher than the experimental value for most versions of the Gómez González et al. 2011 model. For negative current steps most of the models gets more hyperpolarized ( voltage_deflection ) (the most extreme is the Gómez González et al. 2011 model with the n129 morphology), while the Gómez González et al. 
2011 model with the n125 morphology and the Migliore et al. 2011 model get less hyperpolarized than it was observed experimentally. The sag amplitudes are also quite high for the Gómez González et al. 2011 n129, and n130 models, while the Katz et al. 2009, and all versions of the Golding et al. 2001 models basically have no hyperpolarizing sag. It is quite conspicuous how much the amplitude of the action potentials ( APlast_amp , AP_amplitude , AP2_amp ) differs in the Gómez González et al. 2011 models from the experimental values and from the other models as well. The Katz et al. 2009 and one of the versions (“ ”) of the Golding et al. 2001 model have slightly too high action potential amplitudes, and these models have relatively small action potential width ( AP_width ). On the other hand, the rising phase ( AP_rise_time , AP_rise_rate ) of the Katz et al. 2009 model appears to be too slow. Looking at the inverse interspike interval ( ISI ) values, it can be seen that the experimental spike trains show adaptation in the ISIs, meaning that the first ISI is smaller (the inverse ISI is higher) than the last ISI for the same current injection amplitude. This behavior can be observed in the case of the Katz et al. 2009 model, three versions (n128, n129, n130 morphology) of the Gómez González et al. 2011 model, but cannot really be seen in the Bianchi et al. 2011, the Poirazi et al. 2003 and the three versions of the Golding et al. 2001 models. At first look it may seem contradictory that in the case of the Gómez González et al. 2011 model version n129 morphology the spike counts are quite low, while the mean frequency and the inverse ISI values are high. This is because the soma of this model does not fire over the whole period of the stimulation, but starts firing at higher frequencies, then stops firing for rest of the stimulus (see ). The Katz et al. 2009 model fires quite a high number of action potentials ( Spikecount ) compared to the experimental data, at a high frequency. In the experimental recordings there is a delay before the first action potential is generated, which becomes shorter with increasing current intensity (indicated by the inv_time_to_first_spike feature that becomes larger with increasing input intensity). In most of the models this behavior can be observed, albeit to different degrees. The Katz et al. 2009 model has the shortest delays (highest inv_time_to_first_spike values), but the effect is still visible. To quantify the difference between the experimental dataset and the simulated output of the models, these were compared using the feature-based error function (Z-Score) described above to calculate the feature score. shows the mean scores of the model features whose absolute values are illustrated in (averaged over the different current step amplitudes examined), while indicates the number of successfully evaluated features out of the number of features that were attempted to be evaluated. From it is even more clearly visible that each model fits some experimental features well but does not capture others. For example, it is quite noticeable in that most of the versions of the Gómez González et al. 2011 model (greenish dots) perform well for features describing action potential timing (upper part of the figure, e.g., ISIs , mean_frequency , spikecount ), but get higher feature scores for features of action potential shape (lower part of the figure, e.g., AP_rise_rate , AP_rise_time , AP_fall_rate , AP_fall_time , AP amplitudes ). Conversely, the Katz et al. 
2009 model achieved better scores for AP shape features than for features describing AP timing. It is also worth noting that none of the feature scores for the model of Migliore et al. 2011 was higher than 4; however, looking at it can be seen that less than half of the experimental features were successfully evaluated in this model, which is because it does not fire action potentials for the current injection amplitudes examined here. As mentioned above the proportion of the successfully evaluated features is also an important measure of how well the behavior of the models fits the specific experimental observations, and should be taken into account.
In the Depolarization Block Test three features are evaluated. Two of them examine the threshold current intensity to reach depolarization block. The I_maxNumAP feature is the current intensity at which the model fires the maximum number of action potentials, and the I_below_depol_block feature is the current intensity one step before the model enters depolarization block. Both are compared to the experimental I th feature because, in the experiment , the number of spikes increased monotonically with increasing current intensity up to the current amplitude where the cell entered depolarization block during the stimulus, which led to a drop in the number of action potentials. By contrast, we experienced that some models started to fire fewer spikes for higher current intensities while still firing over the whole period of the current step stimulus, i.e., without entering depolarization block. Therefore, we introduced the two separate features for the threshold current. If these two feature values are not equal, a penalty is added to the score. The third evaluated feature is V eq , the equilibrium potential during the depolarization block, which is calculated as the average of the membrane potential over the last 100 ms of a current pulse with amplitude 50 pA above I_maxNumAP (or 50 pA above I_below_depol_block if its value is not equal to I_maxNumAP ). Each model has a value for the I_maxNumAP feature, while those models that do not enter depolarization block are not supposed to have a value for the I_below_depol_block feature and the Veq feature. The results from applying the Depolarization Block Test to the models from ModelDB are shown in . According to the test, four of the models entered depolarization block. However, by looking at the actual voltage traces provided by the test, it becomes apparent that only the Bianchi et al. 2011 model behaves correctly (which was developed to show this behavior). The other three models actually managed to “cheat” the test. In the case of the Katz et al. 2009 and the Golding et al. 2001 “ ” models, the APs get smaller and smaller with increasing stimulus amplitude until they get so small that they do not reach the threshold for action potential detection; therefore, these APs are not counted by the test and V eq is also calculated. The Gómez González et al. 2011 model adjusted to the n129 morphology does not fire during the whole period of the current stimulation for a wide range of current amplitudes (see ). Increasing the intensity of the current injection it fires an increasing number of spikes, but always stops after a while before the end of the stimulus. On the other hand, there is a certain current intensity after which the model starts to fire fewer action potentials, and which is thus detected as I_maxNumAP by the test. Because no action potentials can be detected during the last 100 ms of the somatic response one step above the detected “threshold” current intensity, the model is declared to have entered depolarization block, and a V eq value is also extracted. In principle, it would be desirable to modify the test so that it correctly rejects the three models above. However, the models described above shows so similar behavior to depolarization block that is hard to distinguish using automatic methods. 
Furthermore, we have made substantial efforts to make the test more general and applicable to a wide variety of models with different behavior, and we are concerned that defining and adding further criteria to the test to deal with these specific cases would be an ad hoc solution, and would possibly cause further ‘cheats’ when applied to other models with unexpected behavior. These cases underline the importance of critically evaluating the full output (especially the figures of the recorded voltage traces) of the tests rather than blindly accepting the final scores provided.
This test first finds all the dendritic segments that belong to the main apical dendrite of the model and which are 50, 150, 250, 350 ± 20 μm from the soma, respectively. Then a train of action potentials of frequency around 15 Hz is triggered in the soma by injecting a step current of appropriate amplitude (as determined by the test), and the amplitudes of the first and last action potentials in the train are measured at the selected locations. In the Bianchi et al. 2012 and the Poirazi et al. 2003 models (which share the same morphology, see ) no suitable trunk locations could be found in the most proximal (50 ± 20 μm) and most distal (350 ± 20 μm) regions. This is because this morphology has quite long dendritic sections that are divided into a small number of segments. In particular, the first trunk section (apical_dendrite[0]) originates from the soma, is 102.66 μm long, and has only two segments. The center of one of them is 25.67 μm far from the soma, while the other is already 77 μm away from the soma. None of these segments belongs to the 50 ± 20 μm range, and therefore they are not selected by the test. The n123 morphology of the Gómez González et al. 2011 model has the same shape , but in this case the segments are different, and therefore it does not share the same problem. At the remaining, successfully evaluated distance ranges in the apical trunk of the Bianchi et al. 2012 model, action potentials propagate very actively, barely attenuating. For the AP1_amp and APlast_amp features at these distances, this model has the highest feature score , while the Poirazi et al. 2003 model performs quite well. The Golding et al. 2001 model was designed to investigate how the distribution of ion channels can affect the back-propagation efficacy in the trunk. The two versions of the Golding et al. 2001 model (“ ” and “ ” versions) which are supposed to be weakly propagating according to the corresponding paper , are also weakly propagating according to the test. However, the difference between their strongly and weakly propagating feature scores is not too large , which is probably caused by the much smaller standard deviation value of the experimental data for the weakly propagating case. Although the amplitudes of the first action potentials of these two models fit the experimental data relatively well, they start to decline slightly closer to the soma than it was observed experimentally, as the amplitudes are already very small at 250 ± 20 μm . (In the data corresponding to these two versions of the model are almost completely overlapping for more distal regions.) The amplitudes for the last action potential fit the data well, except in the most proximal regions (see the relatively high feature score in or the detailed results here: https://github.com/KaliLab/HippoUnit_demo ). For all versions of the Golding et al. 2001 model, AP amplitudes are too high at the most proximal distance range. As for the strongly propagating version of the Golding et al. 2001 model (“ ” version), the amplitude of the first action potential is too high at the proximal locations, but further it fits the data well. The amplitude of the last action potential remains too high even at more distal locations. It is worth noting that, in the corresponding paper , they only examined a single action potential triggered by a 5 ms long input in their simulations, and did not examine or compare to their data the properties of the last action potential in a longer spike train. 
Finally, we note that in all versions of the Golding et al. 2001 model a spike train with frequency around 23 Hz was evoked and examined as it turned out to be difficult to set the frequency closer to 15 Hz. The different versions of the Gómez González et al. 2011 model behave qualitatively similarly in this test, although there were smaller quantitative differences. In almost all versions the amplitudes of the first action potential in the dendrites are slightly too low at the most proximal locations but fit the experimental data better at further locations. The exceptions are the versions with the n128 and n129 morphologies, which have lower first action potential amplitudes at the furthest locations, but not low enough to be considered as weak propagating. The amplitudes for the last action potential are too high at the distal regions but fit better at the proximal ones. The only exception is the one with morphology n129, where the last action potential attenuates more at further locations and fits the data better. In the case of the Katz et al. 2009 model, a spike train with frequency around 40 Hz was examined, as the firing frequency increases so suddenly with increasing current intensity in this model that no frequency closer to 15 Hz could be adjusted. In this model the last action potential propagates too strongly, while the dendritic amplitudes for the first action potential are close to the experimental values. In the Migliore et al. 2011 model the amplitudes for the last action potential are too high, while the amplitude of the first back-propagating action potential is too low at locations in the 250 ± 20 μm and 350 ± 20 μm distance ranges. Finally, all the models that we examined were found to be strongly propagating by the test, with the exception of those versions of the Golding et al. 2001 model that were explicitly developed to be weakly propagating.
In this test the extent of the attenuation of the amplitude of an excitatory post-synaptic potential (EPSP) is examined as it propagates towards the soma from different input locations in the apical trunk. The Katz et al. 2009, the Bianchi et al. 2012, and all versions of the Golding et al. 2001 models perform quite well in this test . The various versions of the Golding et al. 2001 model are almost identical in this respect, which is not surprising as they differ only in the distribution of the sodium and A-type potassium channels. This shows that, as we would expect, these properties do not have much effect on the propagation of relatively low-amplitude signals such as unitary PSPs. Interestingly, the different versions of the Gómez González et al. 2011 model, with different morphologies, behave quite differently, which shows that this behavior can depend very much on the morphology of the dendritic tree.
This test probes the integration properties of the radial oblique dendrites of rat CA1 pyramidal cell models. The test is based on the experimental results described in . In this study, the somatic voltage response was recorded while synaptic inputs in single oblique dendrites were activated in different spatio-temporal combinations using glutamate uncaging. The main finding was that a sufficiently high number of synchronously activated and spatially clustered inputs produced a supralinear response consisting of a fast (Na) and a slow (NMDA) component, while asynchronously activated inputs summed linearly or sublinearly. This test selects all the radial oblique dendrites of the model that meet the experimental criteria: they are terminal dendrites (they have no child sections) and are at most 120 μm from the soma. Then the selected dendrites are stimulated in a proximal and in a distal region (separately) using an increasing number of clustered, synchronous or asynchronous synaptic inputs to get the voltage responses of the model, and extract the features of dendritic integration. The synaptic inputs are not unitary inputs, i.e., their strength is not equivalent to the strength of one synapse in the real cell; instead, the strength is adjusted in a way that 5 synchronous inputs are needed to trigger a dendritic action potential. The intensity of the laser used for glutamate uncaging was set in a similar way in the experiments . Most of the features were extracted at this just-suprathreshold level of input. We noticed that in some cases the strength of the synapse is not set correctly by the test; for example, it may happen that an actual dendritic spike does not reach the spike detection threshold in amplitude, or sometimes the EPSP may reach the threshold for spike detection without actual spike generation. The user has the ability to set the threshold used by eFEL for spike detection, but sometimes a single threshold may not work even for the different oblique dendrites (and proximal and distal locations in the same dendrites) of a single model. For consistency, we used the same spike detection threshold of -20 mV for all the models. The synaptic stimulus contains an AMPA and an NMDA receptor-mediated component. As the default synapse, HippoUnit uses the Exp2Syn double exponential synapse built into NEURON for the AMPA component, and its own built-in NMDA receptor model, whose parameters were set according to experimental data from the literature (see the Methods section for more details). In those models that originally do not have any synaptic component (the Bianchi et al 2011 model and all versions of the Golding et al. 2001 model) this default synapse was used. Both the Katz et al. 2009 and the Migliore et al. 2011 models used the Exp2Syn in their simulations, so in their case the time constants of this function were set to the values used in the original publications. As these models did not contain NMDA receptors, the default NMDA receptor model and the default AMPA/NMDA ratio of HippoUnit were used. The Gómez González et al 2011 and the Poirazi et al. 2003 models have their own AMPA and NMDA receptor models and their own AMPA/NMDA ratio values to be tested with. As shown by the averaged “measured EPSP vs expected EPSP” curves in , all three versions of the Golding et al. 2001 model have a jump in the amplitude of the somatic response at the threshold input level, which is the result of the generation of dendritic spikes. 
However, even these larger average responses do not reach the supralinear region, as it would be expected according to the experimental observations . The reason for this discrepancy is that a dendritic spike was generated in the simulations in only a subset of the stimulated dendrites; in the rest of the dendrites tested, the amplitude of the EPSPs went above the spike detection threshold during the adjustment of the synaptic weight without actually triggering a dendritic spike, which led to the corresponding synaptic strength being incorrectly set for that particular dendrite. Averaging over the results for locations with and without dendritic spikes led to an overall sublinear integration profile. The Migliore et al. 2011 model performs quite well on this test. In this case, seven dendrites could be tested out of the ten dendrites within the correct distance range because, in the others, the dendritic spike at the threshold input level also elicited a somatic action potential, and therefore these dendrites were excluded from further testing. In the Katz et al. 2009 model all the selected dendritic locations could be tested, and in most of them the synaptic strength could be adjusted appropriately. For a few dendrites, some input levels higher than the threshold for dendritic spike generation also triggered somatic action potentials. This effect causes the high supralinearity in the “measured EPSP vs expected EPSP” curve in , but has no effect on the extracted features. In the Bianchi et al. 2012 model only one dendrite could be selected, in which very high amplitude dendritic spikes were evoked by the synaptic inputs, making the signal integration highly supralinear. In the Poirazi et al. 2003 model also only one dendrite could be selected based on its distance from the soma; furthermore, only the distal location could be tested even in this dendrite, as at the proximal location the dendritic action potential at the threshold input level generated a somatic action potential. However, at the distal location, the synaptic strength could not be set correctly. For the synaptic strength chosen by the test, the actual threshold input level where a dendritic spike is first generated is at 4 inputs, but this dendritic AP is too small in amplitude to be detected, and the response to 5 inputs is recognized as the first dendritic spike instead. Therefore, the features that should be extracted at the threshold input level are instead extracted from the voltage response to 5 inputs. In this model this results in a reduced supralinearity value, as this feature is calculated one input level higher than the actual threshold. In addition, for even higher input levels dendritic bursts can be observed, which causes large supralinearity values in the “measured EPSP vs expected EPSP” curve in , but this does not affect the feature values. Models from Gómez González et al. 2011 were expected to be particularly relevant for this test, as these models were tuned to fit the same data set on which this test is based. However, we encountered an important issue when comparing our test results for these models to the results shown in the paper . In particular, the paper clearly indicates which dendrites were examined, and it is stated that those are at maximum 150 μm from the soma. However, when we measured the distance of these locations from the soma by following the path along the dendrites (as it is done by the test of HippoUnit), we often found it to be larger than 150 μm. 
We note that when the distance was measured in 3D coordinates rather than along the dendrites, all the dendrites used by Gómez González et al. 2011 appeared to be within 150 μm of the soma, so we assume that this definition was used in the paper. As we consider the path distance to be more meaningful than Euclidean distance in this context, and this was also the criterion used in the experimental study, we consistently use path distance in HippoUnit to find the relevant dendritic segments. Nevertheless, this difference in the selection of dendrites should be kept in mind when the results of this validation for models of Gómez González et al. 2011 are evaluated. In two versions of the Gómez González et al. 2011 model (those that were adjusted to the n123 and n125 morphologies) only one oblique dendrite matched the experimental criteria and could therefore be selected, and these are not among those that were studied by the developers of the model. In each of these cases the dendritic spike at the proximal location at the input threshold level triggered a somatic action potential, and therefore only the distal location could be tested. In the case of the n125 morphology, the dendritic spikes that appear first for just-suprathreshold input are so small in amplitude that they do not reach the spike detection threshold (-20 mV), and are thus not detected. Therefore, the automatically adjusted synaptic weight is larger than the appropriate value would be, which results in larger somatic EPSPs than expected (see ). With this synaptic weight, the first dendritic spike and therefore the jump to the supralinear region in the “measured EPSP vs expected EPSP” curve is for 4 synaptic inputs instead of 5. This is also the case in one of the two selected dendrites of the version of this model with the n128 morphology. Similarly to the Poirazi et al. 2003 model, this results in a lower degree of nonlinearity at threshold feature value, than it would be if the feature were extracted at the actual threshold input level (4 inputs) instead of the one which the test attempted to adjust (5 inputs). The suprathreshold nonlinearity feature has a high value because at that input level (6 inputs), somatic action potentials are triggered. In the version of the Gómez González et al. 2011 model that uses the n129 morphology, 10 oblique dendrites could be selected for testing (none of them is among those that its developers used) but only 4 could be tested because, for the rest, the dendritic spike at the threshold input level already elicits a somatic action potential. The synaptic weights required to set the threshold input level to 5 are not found correctly in most cases; the actual threshold input level is at 4 or 3. Suprathreshold nonlinearity is high, because at that input level (6 inputs) somatic action potentials are triggered for some of the examined dendritic locations. The version of the Gómez González et al. 2011 model that uses the n130 morphology achieves the best (lowest) final score on this test. In this model many oblique dendrites could be selected and tested, including two (179, 189) that the developers used in their simulations . In most cases the synaptic weights are nicely found to set the threshold input level to 5 synapses. For some dendrites there are somatic action potentials at higher input levels, but that does not affect the features. The value of the time to peak feature for each model is much smaller than the experimental value . 
This is because, in each of the models, the maximum amplitude of the somatic EPSP is determined by the fast component caused by the appearance of the dendritic sodium spikes, whereas in the experiments it is shaped mainly by the slow NMDA component that follows the sodium spike.
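The difference between the two distance definitions discussed above can be made concrete with a short sketch (an illustrative example rather than code taken from HippoUnit; it assumes a reasonably recent NEURON version in which the 3D points of a section are accessible as section methods):

```python
# Contrast of path distance (measured along the neurites, as used by HippoUnit)
# with straight-line (Euclidean) distance for a dendritic location.
from neuron import h
import numpy as np

def path_distance_from_soma(soma_sec, target_seg):
    h.distance(0, 0.5, sec=soma_sec)                      # set the origin to the middle of the soma
    return h.distance(1, target_seg.x, sec=target_seg.sec)

def euclidean_distance_from_soma(soma_sec, target_sec):
    p_soma = np.array([soma_sec.x3d(0), soma_sec.y3d(0), soma_sec.z3d(0)])
    i_last = target_sec.n3d() - 1                          # last stored 3D point of the dendrite
    p_dend = np.array([target_sec.x3d(i_last), target_sec.y3d(i_last), target_sec.z3d(i_last)])
    return float(np.linalg.norm(p_dend - p_soma))
```

For an oblique dendrite that branches off far from the soma along a curved path, the path distance can easily exceed 150 μm even when the straight-line distance of the same location from the soma stays below that limit, which is exactly the discrepancy described above.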
In summary, using HippoUnit, we compared the behavior of several rat hippocampal CA1 pyramidal cell models available on ModelDB in several distinct domains, and found that all of these models match experimental results well in some domains (typically those that they were originally built to capture) but fit the experimental observations less precisely in others. summarizes the final scores achieved by the different models on the various tests (lower scores indicate a better match in all cases). Perhaps a bit surprisingly, the different versions of the Golding et al. 2001 model showed a good match to the experimental data in all of the tests (except for the Depolarization Block Test), even though these are the simplest ones among the models in the sense that they contain the smallest number of different types of ion channels. On the other hand, these models do not perform outstandingly well on the Back-propagating Action Potential Test, although they were developed to study the mechanisms behind (the dichotomy of) action potential back-propagation, which is evaluated by this test based on the data that were published together with these models . The most probable reason for this surprising observation is that, in the original study , only a few features of the model’s response were compared with the experimental results. HippoUnit tested the behavior of the model based on a larger set of experimental features from the original study, and was therefore able to uncover differences between the model’s response and the experimental data on features for which the model was not evaluated in the source publication. The Bianchi et al. 2012 model is the only one that can produce real depolarization block within the range of input strengths examined by the corresponding test. The success of this model in this test is not surprising because this is the only model that was tuned to reproduce this behavior; on the other hand, the failure of the other models in this respect clearly shows that proper depolarization block requires some combination of mechanisms that are at least partially distinct from those that allow good performance in the other tests. The Bianchi et al. 2012 model achieves a relatively high final score only on the Back-propagating Action Potential Test, as action potentials seem to propagate too actively in its dendrites, leading to high AP amplitudes even in more distal compartments. The Gómez González et al. 2011 models were developed to capture the same experimental observations on dendritic integration that are tested by the Oblique Integration Test of HippoUnit, but, somewhat surprisingly, some of its versions achieved quite high feature scores on this test, while others perform quite well. This is partly caused by the fact that HippoUnit often selects different dendritic sections for testing from those that were studied by the developers of these models (see above for details). The output of HippoUnit shows that the different oblique dendrites of these models can show quite diverse behavior, and beyond those studied in the corresponding paper , other oblique dendrites do not necessarily match the experimental observations. Some of its versions also perform relatively poorly on the PSP-Attenuation Test, similar to the Migliore et al. 2011 and the Poirazi et al. 2003 models. The Katz et al. 
2009 model is not outstandingly good in any of the tests, but still achieves relatively good final scores everywhere (although its apparent good performance on the Depolarization Block Test is misleading—see detailed explanation above). The model files that were used to test the models described above, the detailed validation results (all the output files of HippoUnit), and the Jupyter Notebooks that show how to run the tests of HippoUnit on these models are available in the following Github repository: https://github.com/KaliLab/HippoUnit_demo .
Besides enabling a detailed comparison of published models, HippoUnit can also be used to monitor the performance of new models at various stages of model development. Here, we illustrate this by showing how we have used HippoUnit within the HBP to systematically validate detailed multi-compartmental models of hippocampal neurons developed using multi-objective parameter optimization methods implemented by the open source Blue Brain Python Optimization Library (BluePyOpt). To this end, we extended HippoUnit to allow it to handle the output of optimization performed by BluePyOpt (see Methods). Models of rat CA1 pyramidal cells were optimized using target feature data extracted from sharp electrode recordings. Then, using the Somatic Features Test of HippoUnit, we compared the behavior of the models to features extracted from this sharp electrode dataset. However, while only a subset of the features extracted by eFEL was used in the optimization (mostly those that describe the rate and timing of the spikes, e.g., the different inter-spike interval (ISI), time to last/first spike, and mean frequency features), we considered all the eFEL features that could be successfully extracted from the data during validation. In addition, sharp electrode measurements were also available for several types of interneurons in the rat hippocampal CA1 region, and models of these interneurons were also constructed using similar automated methods. Using the appropriate observation file and the stimulus file belonging to it, the Somatic Features Test of HippoUnit can also be applied to these models to evaluate their somatic spiking features. The other tests of HippoUnit are currently not applicable to interneurons, mostly due to the lack of appropriate target data. We applied the tests of HippoUnit to the version of the models published in , and to a later version (v4) described in Ecker et al. (2020), which was intended to further improve the dendritic behavior of the models, as this is critical for their proper functioning in the network. The two sets of models were created using the same morphology files and similar optimization methods and protocols. The new optimizations differed mainly in the allowed range for the density of the sodium channels in the dendrites. For the pyramidal cell models, a new feature was also introduced in the parameter optimization that constrains the amplitudes of back-propagating action potentials in the main apical dendrite. The new interneuron models also had an exponentially decreasing (rather than constant) density of Na channels, and A-type K channels with more hyperpolarized activation in their dendrites. For more details on the models, see the original publications. After running all the tests of HippoUnit on both sets of models generated by BluePyOpt, we compared the old and the new versions of the models by statistically analyzing the final scores achieved by the models of the same cell type on the different tests. The median, the interquartile range, and the full range of the final scores achieved by the two versions of the model set were compared. According to the results of the Wilcoxon signed-rank test, the new version of the models achieved significantly better scores on the Back-propagating Action Potential Test (p = 0.0046), on the Oblique Integration Test (p = 0.0033), and on the PSP Attenuation Test (p = 0.0107), which is the result of reduced dendritic excitability.
Moreover, in most of the other cases the behavior of the models improved slightly (but not significantly) with the new version. Only in the case of the Somatic Features test applied to bAC interneurons did the new models perform slightly worse (but still quite well), and this difference was not significant (p = 0.75). These results show the importance of model validation performed against experimental findings, especially those not considered when building the model, in every iteration during the process of model development. This approach can greatly facilitate the construction of models that perform well in a variety of contexts, help avoid model regression, and guide the model building process towards a more robust and general implementation.
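The statistical comparison described above can be reproduced with a few lines of code; the following is a minimal sketch with hypothetical score values, not the authors' actual analysis script:

```python
# Paired comparison of the final scores of the old and new model versions on one test
from scipy.stats import wilcoxon

scores_old = [2.4, 3.1, 1.9, 2.8, 2.2, 3.5, 2.0]   # hypothetical final scores, one per model
scores_new = [1.8, 2.6, 1.7, 2.1, 2.0, 2.9, 1.6]   # the same models, new version

stat, p_value = wilcoxon(scores_old, scores_new)
print(f"Wilcoxon signed-rank statistic = {stat:.1f}, p = {p_value:.4f}")
```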
The HBP is developing scientific infrastructure to facilitate advances in neuroscience, medicine, and computing . One component of this research infrastructure is the Brain Simulation Platform (BSP) ( https://bsp.humanbrainproject.eu ), an online collaborative platform that supports the construction and simulation of neural models at various scales. As we argued above, systematic, automated validation of models is a critical prerequisite of collaborative model development. Accordingly, the BSP includes a software framework for quantitative model validation and testing that explicitly supports applying a given validation test to different models and storing the results . The framework consists of a web service, and a set of test suites, which are Python modules based on the SciUnit package. As we discussed earlier, SciUnit uses the concept of capabilities, which are standardized interfaces between the models to be tested and the validation tests. By defining the capabilities to which models must adhere, individual validation tests can be implemented independently of any specific model and used to validate any compatible model despite differences in their internal structures, the language and/or the simulator used. Each test must include a specification of the required model capabilities, the location of the reference (experimental) dataset, and data analysis code to transform the recorded variables (e.g., membrane potential) into feature values that allow the simulation results to be directly and quantitatively compared to the experimental data through statistical analysis. The web services framework supports the management of models, tests, and validation results. It is accessible via web apps within the HBP Collaboratory, and also through a Python client. The framework makes it possible to permanently record, examine and reproduce validation results, and enables tracking the evolution of models over time, as well as comparison against other models in the domain. Every test of HippoUnit described in this paper has been individually registered in the Validation Framework. The JSON files containing the target experimental data for each test are stored (besides the HippoUnit_demo GitHub repository) in storage containers at the Swiss National Supercomputing Centre (CSCS), where they are publicly available. The location of the corresponding data file is associated with each registered test, so that the data are loaded automatically when the test is run on a model via the Validation Framework. As the Somatic Features Test of HippoUnit was used to compare models against five different data sets (data from sharp electrode measurements in pyramidal cells and interneurons belonging to three different electronic types, and data obtained from patch clamp recordings in pyramidal cells), these are considered to be and have been registered as five separate tests in the Validation Framework. All the models that were tested and compared in this study (including the CA1 pyramidal cell models from the literature and the BluePyOpt optimized CA1 pyramidal cells and interneurons of the HBP) have been registered and are available in the Model Catalog of the Validation Framework with their locations in the CSCS storage linked to them. In addition to the modifications that were needed to make the models compatible with testing with HippoUnit (described in the section “Methods–Models from literature”), the versions of the models uploaded to the CSCS container also contain an __init__.py file. 
This file implements a python class that inherits all the functions of the ModelLoader class of HippoUnit without modification. Its role is to make the validation of these models via the Framework more straightforward by defining and setting the parameters of the ModelLoader class (such as the path to the HOC and NMODL files, the name of the section lists, etc.) that otherwise need to be set after instantiating the ModelLoader (see the HippoUnit_demo GitHub repository: https://github.com/KaliLab/HippoUnit_demo/tree/master/jupyter_notebooks ). The validation results discussed in this paper have also been registered in the Validation Framework, with all their related files (output figures and JSON files) linked to them. These can be accessed using the Model Validation app of the framework. The Brain Simulation Platform of the HBP contains several online ‘Use Cases’, which are available on the platform and help the users to try and use the various established pipelines. The Use Case called ‘Hippocampus Single Cell Model Validation’ can be used to apply the tests of HippoUnit to models that were built using automated parameter optimization within the HBP. The Brain Simulation Platform also hosts interactive “Live Paper” documents that refer to published papers related to the models or software tools on the Platform. Live Papers provide links that make it possible to visualize or download results and data discussed in the respective paper, and even to run the associated simulations on the Platform. We have created a Live Paper ( https://humanbrainproject.github.io/hbp-bsp-live-papers/2020/saray_et_al_2020/saray_et_al_2020.html ) showing the results of the study presented in this paper in more detail. This interactive document provides links to all the output figures and data files resulting from the validation of the models from literature discussed here. This provides a more detailed insight into their behavior individually. Moreover, as part of this Live Paper a HippoUnit Use Case is also available in the form of a Jupyter Notebook, which guides the user through running the validation tests of HippoUnit on the models from literature that are already registered in the Framework, and makes it possible to reproduce the results presented here.
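As an illustration of the kind of wrapper class described above, a minimal __init__.py could look as follows. The attribute and parameter names here are illustrative assumptions rather than the exact HippoUnit API; the authoritative names are those defined by the ModelLoader class itself.

```python
# Hypothetical __init__.py: a class that inherits ModelLoader unchanged and only
# pre-sets the model-specific parameters needed by the validation tests.
from hippounit.utils import ModelLoader

class MyCA1PyramidalCell(ModelLoader):
    def __init__(self):
        super(MyCA1PyramidalCell, self).__init__()   # keep all ModelLoader functionality
        self.name = "my_CA1_model"
        self.hocpath = "./my_model/morphology_and_biophysics.hoc"  # standalone HOC file
        self.soma_seclist_name = "somatic"            # section list names used by the tests
        self.trunk_seclist_name = "trunk"
        self.oblique_seclist_name = "obliques"
        self.v_init = -70.0                           # initial membrane potential (mV)
        self.celsius = 34.0                           # simulation temperature (degrees C)
```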
Applications of the HippoUnit test suite In this article, we have described the design, usage, and some initial applications of HippoUnit, a software tool that enables the automated comparison of the physiological properties of models of hippocampal neurons with the corresponding experimental results. HippoUnit, together with its possible extensions and other similar tools, allows the rapid, systematic evaluation and comparison of neuronal models in multiple domains. By providing the software tools and examples for effective model validation, we hope to encourage the modeling community to use more systematic testing during model development, with the aim of making the process of model building more efficient, reproducible and transparent. One important use case for the application of HippoUnit is the evaluation and comparison of existing models. We demonstrated this by using HippoUnit to test and compare the behavior of several models of rat CA1 pyramidal neurons available on ModelDB in several distinct domains against electrophysiological data available in the literature (or shared by collaborators). Besides providing independent and standardized verification of the behavior of the models, the results also allow researchers to judge which existing models show a good match to the experimental data in the domains that they care about, and thus to decide whether they could re-use one of the existing models in their own research. Besides enabling the comparison of different models regarding how well they match a particular dataset, the tests of HippoUnit also allow one to determine the match between a particular model and several datasets of the same type. As experimental results can be heavily influenced by recording conditions and protocols, and also depend on factors such as the strain, age, and sex of the animal, it is important to find out whether the same model can simultaneously capture the outcome of different experiments, and if not, how closely it is able to match the different datasets. As an example, it would be possible using the Somatic Features Test of HippoUnit to compare the somatic behavior of a particular model to features extracted from both patch clamp and sharp electrode recordings and determine which of these is captured better by the model. HippoUnit is also a useful tool during model development. In a typical data-driven modeling scenario, researchers decide which aspects of model behavior are relevant for them, find experimental data that constrain these behaviors, then use some of these data to build the model, and use the rest of the data to validate the model. HippoUnit and similar test suites make it possible to define quantitative criteria for declaring a model valid (ideally before modeling starts), and to apply these criteria consistently throughout model development. We demonstrated this approach through the example of detailed single cell models of rat CA1 pyramidal cells and interneurons optimized within the HBP. Furthermore, several authors have argued for the benefits of creating “community models” through the iterative refinement of models in an open collaboration of multiple research teams. Such consensus models would aim to capture a wide range of experimental observations, and may be expected to generalize (within limits) to novel modeling scenarios. A prerequisite for this type of collaborative model development is an agreement on which experimental results will be used to constrain and validate the models. 
Automated test suites provide the means to systematically check models with respect to all the relevant experimental data, with the aim of tracking progress and avoiding “regression,” whereby previously correct model behavior is corrupted by further tuning. Finally, the tests of HippoUnit have been integrated into the recently developed Validation Framework of the HBP, which makes it possible to collect neural models and validation tests, and supports the application of the registered tests to the registered models. Most importantly, it makes it possible to save the validation results and link them to the models in the Model Catalog, making them publicly available and traceable for the modeling community. Interpreting the results of HippoUnit It is important to emphasize that a high final score on a given validation test using a particular experimental dataset does not mean that the model is not good enough or cannot be useful for a variety of purposes (including the ones it was originally developed for). The discrepancy between the target data and the model’s behavior, as quantified by the validation tests, may be due to several different reasons. First, all experimental data contain noise and may have systematic biases associated with the experimental methods employed. Sometimes the experimental protocol is not described in sufficient detail to allow its faithful reproduction in the simulations. It may also occur that a model is based on experimental data that were obtained under conditions that are substantially different from the conditions for the measurement of the validation target dataset. Using different recording techniques, such as sharp electrode or patch clamp recordings or the different circumstances of the experiments (e.g., the strain, age, and sex of the animal, or the temperature during measurement) can heavily affect the experimental results. Furthermore, the post-processing of the recorded electrophysiological data can also alter the results. For these reasons, probably no single model should be expected to achieve an arbitrarily low score on all of the validation tests developed for a particular cell type. Keeping this in mind, it is important that the modelers decide which properties of the cell type are relevant for them, and what experimental conditions they aim to mimic. Validation results should be interpreted or taken into account accordingly, and the tests themselves may need to be adapted. The interpretation of the results is aided by several additional outputs of the tests besides the final score. The traces, the extracted feature values as well as the feature scores are saved into output files and also plotted for visualization. This information is intended to help determine the strengths and weaknesses of the model and evaluate its usefulness according to the needs of the user. The issue of neuronal variability also deserves consideration in this context. The morphology, biophysical parameters, and physiological behavior of neurons is known to be non-uniform, even within a single cell type, and this variability may be important for the proper functioning and robustness of neural circuits. Recent models of neuronal networks have also started to take into account this variability . The tests of HippoUnit account for experimental variability by measuring the distance of the feature values of the model from the experimental mean (the feature score) in units of the experimental standard deviation. 
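Expressed as a formula, this definition of the feature score for a single feature reads

```latex
\text{feature score} = \frac{\left| x_{\text{model}} - \mu_{\text{exp}} \right|}{\sigma_{\text{exp}}},
```

where x_model is the feature value extracted from the model's response, and μ_exp and σ_exp are the mean and standard deviation of the same feature in the experimental dataset.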
This means that any feature score less than about 1 actually corresponds to behavior which may be considered “typical” in the experiments (within one standard deviation of the mean), and a feature score of 2 or 3 may still be considered acceptable for any single model. In fact, even higher values of the feature score may sometimes be consistent with the data if the experimental distribution is long-tailed rather than normal. However, such high values of the feature score certainly deserve attention as they signal a large deviation from the typical behavior observed in the experiments. Furthermore, the acceptable feature score will generally depend on the goal of the modeling study. In particular, a study which intends to construct and examine a single model of typical experimental behavior should aim to keep all the relevant feature scores relatively low. On the other hand, when modeling entire populations of neurons, one should be prepared to accept a wider range of feature scores in some members of the model population, although the majority of the cells (corresponding to typical members of the experimental population) should still display relatively low scores. In fact, when modeling populations of neurons, one would ideally aim to match the actual distribution of neuronal features (including the mean, standard deviation, and possibly higher moments as well), and the distribution of feature scores (and actual feature values) from the relevant tests of HippoUnit actually provides the information that is necessary to compare the variability of the experimental and model cell populations. Uniform model formats reduce the costs of validation Although HippoUnit is built in a way that its tests are, in principle, model-agnostic, so that the implementation of the tests does not depend on model implementation, it still required a considerable effort to create the standalone versions of the models from literature to be tested, even though all of the selected models were developed for the NEURON simulator. This is because each model has a different file structure and internal logic that needs to be understood in order to create an equivalent standalone version. When the section lists of the main dendritic types do not exist, the user needs to create them by extensively analyzing the morphology and even doing some coding. In order to reduce the costs of systematic validation, models would need to be expressed in a format that is uniform and easy to test. As HippoUnit already has its capability functions implemented in a way that it is able to handle models developed in NEURON, the only requirement for such models is that they should contain a HOC file that describes the morphology (including the section lists for the main dendritic types of the dendritic tree) and all the biophysical parameters of the model, without any additional simulations, GUIs or run-time modifications. Currently, such a standalone version of the models is not made available routinely in publications or on-line databases, but could be added by the creators of the models with relatively little effort. On the other hand, applying the tests of HippoUnit to models built in other languages requires the re-implementation of the capability functions that are responsible for running the simulations on the model (see Methods). In order to save the user from this effort, it would be useful to publish neuronal models in a standard and uniform format that is simulator independent and allows general use in a variety of paradigms. 
This would allow an easier and more transparent process of community model development and validation, as it avoids the need of reimplementation of parts of software tools (such as validation suites), and the creation of new, (potentially) non-traced software versions. This approach is already initiated for neurons and neuronal networks by the developers of NeuroML , NineML , PyNN , Sonata , and Brian . Once a large set of models becomes available in these standardized formats, it will be straightforward to extend HippoUnit (and other similar test suites) to handle these models. Extensibility of HippoUnit Although we were aiming to develop a test suite that is as comprehensive as possible, and that captures the most typical and basic properties of the rat hippocampal CA1 pyramidal cell, the list of features that can be tested by HippoUnit is far from complete. Upon availability of the appropriate quantitative experimental data, new tests addressing additional properties of the CA1 pyramidal cell could be included, for example, on the signal integration of the basal or the more distal apical dendrites, or on action potential initiation and propagation in the axon. Therefore, we implemented HippoUnit in a way that makes it possible to extend it by adding new tests. As HippoUnit is based on the SciUnit package it inherits SciUnits’s modular structure. This means that a test is usually composed of four main classes: the test class, the model class, the capabilities class and the score class (as described in more detail in the Methods section). Thanks to this structure it is easy to extend HippoUnit with new tests by implementing them in new test classes and adding the capabilities and scores needed. The methods of the new capabilities can be implemented in the ModelLoader class, which is a generalized Model class for models built in the NEURON simulator, or in a newly created Model class specific to the model to be tested. Adding new tests to HippoUnit requires adding the corresponding target experimental data as well in the form of a JSON file. The way the JSON files are created depends on the nature and source of the experimental data. In some cases the data may be explicitly provided in the text of the papers (as for the Oblique Integration and the Depolarization Block tests), therefore their JSON files are easy to make manually. Most typically, the data have to be processed to get the desired feature mean and standard deviation values and create the JSON file. In these cases it is worth writing a script that does this automatically. Some examples on how this was done for the current tests of HippoUnit are available here: https://github.com/sasaray/HippoUnit_demo/tree/master/target_features/Examples_on_creating_JSON_files/ . As HippoUnit is open-source and is shared on GitHub, it is possible for other developers, modelers or scientists to modify or extend the test suite working on their own forks of the repository. If they would like to directly contribute to HippoUnit, a ‘pull request’ can be created to the main repository. Generalization possibilities of the tests of HippoUnit In the current version of HippoUnit most of the validation tests can only be used to test models of rat hippocampal CA1 pyramidal cells, as the observation data come from electrophysiological measurements of this cell type and the tests were designed to follow the experimental protocols of the papers from which these data derive. 
However, with small modifications most of the tests can be used for other cell types, or with slightly different stimulation protocols, if there are experimental data available for the features or properties tested. The Somatic Features Test can be used for any cell type and with any current step injection protocol even in its current form using the appropriate data and configuration files. These two files must be in agreement with each other; in particular, the configuration file should contain the parameters of the step current protocols (delay, duration, amplitude) used in the experiments from which the feature values in the data file derive. In this study this test was used with two different experimental protocols (sharp electrode measurements and patch clamp recordings that used different current step amplitudes and durations), and for testing four different cell types (rat hippocampal CA1 PC and interneurons). In the current version of the Depolarization Block Test the properties of the stimulus (delay, duration, and amplitudes) are hard-coded to reproduce the experimental protocol used in a study of CA1 PCs . However, the test could be easily modified to read these parameters from a configuration file like in the case of other tests, and then the test could be applied to other cell types if data from similar experimental measurements are available. As the Back-propagating AP Test examines the back-propagation efficacy of action potentials in the main apical dendrite (trunk), it is mainly suitable for testing pyramidal cell models; however, it can be used for PC models from other hippocampal or cortical regions, potentially using different distance ranges of the recording sites. If different distances are used, the feature names (‘AP1_amp_X’ and ‘APlast_amp_X’, where X is the recording distance) in the observation data file and the recording distances given in the stimuli file must be in agreement. Furthermore, it would also be possible to set a section list of other dendritic types instead of the trunk to be examined by the test. This way, models of other cell types (with dendritic trees qualitatively different from those of PCs) could also be tested. The frequency range of the spike train (10–20 Hz, preferring values closest to 15 Hz) is currently hard-coded in the function that automatically finds the appropriate current amplitude, but the implementation could be made more flexible in this case as well. The PSP Attenuation Test is quite general. Both the distances and tolerance values that determine the stimulation locations on the dendrites and the properties of the synaptic stimuli are given using the configuration file. Here again the feature names in the observation data file (‘attenuation_soma/dend_x_um’, where x is the distance from the soma) must fit the distances of the stimulation locations in the configuration file when one uses the tests with data from a different cell type or experimental protocol. Similarly to the Back-propagating AP Test the PSP Attenuation Test also examines the main apical dendrite (trunk), but could be altered to use section lists of other dendritic types. The Oblique Integration Test is very specific to the experimental protocol of . 
There is no configuration file used here, but the synaptic parameters (of the ModelLoader class) and the number of synapses to which the model should first generate a dendritic spike (‘threshold_index’ parameter of the test class) can be adjusted by the user after instantiating the ModelLoader and the test classes respectively. The time intervals between the inputs (synchronous (0.1 ms), asynchronous (2.0 ms)) are currently hard-coded in the test. HippoUnit has been used mainly to test models of rat hippocampal CA1 pyramidal cells as described above. However, having the appropriate observation data, most of its tests could easily be adapted to test models of different cell types, even in cases when the experimental protocol is slightly different from the currently implemented ones. The extent to which a test needs to be modified in order to test models of other cell types depends on how much the behavior of the new cell type differs from the behavior of rat CA1 pyramidal cells, and to what extent the protocol of the experiment differs from the ones we used as the bases of comparison in the current study.
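As a minimal illustration of the kind of data-processing script mentioned in the extensibility discussion above (the file layout, feature names, and exact JSON structure here are hypothetical; the structure expected by each test should be checked against the examples linked above), an observation file can be generated from a table of per-cell feature measurements:

```python
# Hypothetical example: build an observation JSON file (feature means and standard
# deviations) from a CSV table in which each row is one recorded cell and each
# column is one feature.
import json
import pandas as pd

data = pd.read_csv("measured_features.csv")    # hypothetical input table

observation = {
    feature: {"mean": float(data[feature].mean()), "std": float(data[feature].std())}
    for feature in data.columns
}

with open("target_features.json", "w") as f:
    json.dump(observation, f, indent=4)
```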
S1 Appendix Example of running the Somatic Features Test of HippoUnit using a Jupyter notebook. (DOCX)
A Novel Machine Learning Framework for Comparison of Viral COVID-19–Related Sina Weibo and Twitter Posts: Workflow Development and Content Analysis | c6672946-1a30-411a-bf78-ad131597b68d | 7790734 | Health Communication[mh] | Social media platforms are important communication channels for public engagement of various health issues . Through social media, the public can not only receive information from health agencies and news outlets about various health issues but also actively participate in web-based discussions with peers and influencers to exchange opinions about these issues . Social media platforms have been adopted in various health campaigns by both health agencies and concerned groups, including promotion of vaccination , exercise and healthy lifestyles, and smoking cessation . During health emergencies, especially global infectious disease pandemics, social media has been used substantially by both individuals and organizations. Social media platforms were frequently used during previous public health emergencies of international concern (PHEICs), such as the 2014 Ebola outbreak and the 2016 Zika pandemic . Social media has also been intensively used during the current COVID-19 pandemic; COVID-19 is currently the most mentioned keyword across all major social media platforms worldwide. Therefore, social media can be used for infodemiology studies to better understand public concerns and make informed decisions regarding the COVID-19 pandemic as well. Health emergencies are seldom an isolated health or medical issue. Pandemics, including the current COVID-19 pandemic, are almost always intermingled with complicated interactions of underlying societal and cultural factors that vary within and among countries. Consequently, discussions of these pandemics on social media include content not restricted to health, as observed during the 2014 Ebola and 2016 Zika epidemics . During the current COVID-19 pandemic, it has also been demonstrated that various social and political issues are associated with the pandemic, including different views on nonpharmaceutical interventions (NPIs) such as mask-wearing, social distancing, and stay-at-home-orders . To extract and analyze various content features in social media posts, natural language processing (NLP) methods such as linguistic inquiry and word count (LIWC) are usually applied . However, although LIWC can cover a broad spectrum of topic features, it was not specifically designed for health-related topics. LIWC places more emphasis on psychological processes . In addition, LIWC was developed almost exclusively in the Western sociocultural context and may not work well when analyzing discussions outside Western societies. During the COVID-19 pandemic, many discussions have been taking place on social media platforms in non–English-speaking regions, such as the Sina Weibo platform in China . Alternative data-driven computational linguistic/NLP algorithms aim to deliver more natural insights directly from data, bypass various human assumptions, overcome lack of inclusiveness of features, and reduce potential bias . Examples of commonly used techniques include word embedding, such as word2vec and doc2vec . However, completely data-driven techniques can result in a lack of interpretability. For instance, the exact meanings of vectors resulting from the doc2vec algorithm are unclear, and it is usually used for classification purposes . 
Similar to LIWC, it is still challenging to use the Chinese language as an input to these data-driven algorithms without extensive data preprocessing, which may result in a loss of subtlety in the content of the original Chinese post. Because of these technical challenges, especially the lack of a universally designed content analysis and feature extraction workflow, few studies have compared social media discussions across different sociocultural backgrounds with regard to the COVID-19 pandemic. Cross-platform and cross-cultural studies are infrequent and generally observational. Therefore, we suggest that there is a pressing need to develop a more interpretable and universal content analytical workflow that spans a wide sociocultural spectrum during the current COVID-19 pandemic and future pandemics. Developing this analytical workflow will vastly expand our fine-grained understanding and characterization of the content features of discussions on health issues worldwide. Until such a workflow is achieved, we will not be able to effectively compare and contrast health communication patterns on different social media platforms worldwide. As such, we propose the following two major objectives in this study: (1) develop a content feature extraction and coding scheme to characterize discussions about the current COVID-19 pandemic on major social media platforms across sociocultural backgrounds (Twitter and Sina Weibo); and (2) compare and contrast content features of the most shared viral social media posts on Twitter and Sina Weibo through a comprehensive analytical workflow using state-of-the-art machine learning techniques.
Retrieval of Social Media Posts We acquired social media posts on both Sina Weibo (colloquially referred to as Weibo hereafter) and Twitter from January 6 to April 15, 2020, for a total of 100 days. The reasons we used the same sampling period for the two social media platforms were as follows: 1) it made the sampling process consistent and directly comparable; and 2) this study focused more on sociocultural contexts than on specific geospatial locations. Weibo is almost exclusively used by Chinese users, while Twitter users cover a much wider range of geospatial regions. Given the very different sizes and patterns of the epidemic in different countries, we reasoned that a consistent sampling period could reduce confounding factors such as actual outbreak size and its influence on public perception of COVID-19. The Weibo posts were acquired via the application programming interface (API) of Hong Kong Baptist University in Python. We downloaded all Weibo posts during the sampling period without further sampling; around 4 million Weibo posts were acquired and archived. The tweets were acquired directly from Twitter via a contract between the School of Data Science, the University of North Carolina at Charlotte, and Twitter; they were not retrieved by the commonly used Twitter API or other commercial APIs. The tweets were a 1% sample; however, given the adequately large sample size (more than 10 million tweets), we believe that this sample is a good representation of public discourse regarding the ongoing pandemic on Twitter. The keywords used to retrieve social media posts were COVID19, nCOV19, SARSCoV2, their variants (novel pneumonia, SARS, SARS2, COVID, coronavirus), and other related medical/health terms (GGO, PHEIC, pandemic). Inappropriate, derogatory, and discriminatory terms such as WuhanVirus, WuhanPneumonia, and ChinaVirus were also included to increase the sample size for research purposes. Both original posts and reposts were retrieved if they included the search terms. Identification of Viral Posts "Viral" posts were defined as those with large numbers of shares (also known as "reposts," "retweets," etc) on different social media platforms. There are other ways to define viral posts, such as the number of comments or the number of likes. However, comments may not align with the content and intention of the original posts, while liking does not necessarily propagate the original post on social media. Sharing through reposting or retweeting indicates that the user acknowledged the value of the original post and actively participated in its dissemination on social media. Therefore, the number of shares was used to define viral posts. Nevertheless, the three potential definitions of "viral" posts were highly correlated (Pearson correlation coefficient ρ>0.8 for each pair of definitions). For example, it was very common for a highly shared COVID-19 post to receive many likes and comments as well. Therefore, we suggest that focusing on one definition of "viral" posts provided sufficient insight into the other two definitions. To avoid oversampling on certain days when a cluster of viral posts occurred (ie, numerous posts occurred on the same day), we identified and selected the 5 most shared posts on Weibo and the 5 most shared posts on Twitter for each day of the sampling period. Practically, we ordered the posts by original posting date first and then ranked them based on the number of shares they received on each day.
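A minimal sketch of this per-day ranking step is shown below (the column names are assumptions, not the actual field names of the retrieved data):

```python
# Hypothetical example: select the 5 most shared posts per platform for each day
import pandas as pd

posts = pd.read_csv("covid_posts.csv", parse_dates=["created_at"])  # hypothetical file

viral = (
    posts.assign(post_date=posts["created_at"].dt.date)
         .sort_values("share_count", ascending=False)
         .groupby(["platform", "post_date"])
         .head(5)          # the 5 most shared posts per platform per day
)
```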
Due to the fast pace of social media, most viral posts received the majority of their reposts within a short period of time, and the overall lifespan of a viral post usually lasted less than 48 hours. Eventually, a total of 1000 viral COVID-19 social media posts were selected: 500 for Weibo and 500 for Twitter. Because of the relatively large sample size and the size of the content feature set (discussed next), we believe the sample is adequate to provide an accurate, granular characterization of viral social media posts regarding COVID-19.

Extraction, Annotation, and Quantification of Content Features

In this study, we developed a relatively novel and comprehensive content analysis workflow to characterize and quantify various content features of health-related social media posts. The creation of content features went through two rounds of iteration. In the first round, we used an open-coding approach to identify an initial set of features by manually analyzing a set of 200 randomly selected social media posts. We then randomly selected another set of 800 posts, combined them with the 200 posts, split them into 5 subsets (200 each), and asked five student coders to analyze them independently. The student coders were provided with the list of initial features and were all bilingual, with fluency in both Chinese and English. The coders were also asked to create new features if they were missing from the existing list. Finally, we refined the list of features based on our review, comparison, and evaluation of the coding results; a few new content features were discussed and added in this round. In the second round, our research team used the refined features to screen, evaluate, characterize, and validate a test set of 50 randomly selected posts. Note that posts in this set were not necessarily viral posts; as discussed later in this paper, randomly selecting posts increases the coverage of topic contents in the posts. We performed several iterations of intercoder reliability analysis, discussion, and refinement to ensure clarity and consistency in the definition and coding scheme of the features. The intercoder reliability (kappa value) threshold was set at 0.8 before deploying more comprehensive coding. The coding scheme can be described concisely as follows: each feature was 0-1 binary coded (ie, a post either had or did not have the specific content feature). This coding scheme is more objective and easier to interpret than LIWC because it only considers the presence of a specific content feature. In addition, because of the objectivity of the coding scheme, there is no need to translate the social media posts, so the subtlety of the original post is not lost in translation. The final version included a total of 77 content features grouped into 6 major categories, each containing more specific features. The six major categories were clinical and epidemiological features (eg, mentioning any symptoms or signs, transmission, or diagnosis and testing); countermeasures and COVID-19–related resources (eg, mentioning face masks, other medical supplies, or disinfection); policies and politics (eg, mentioning social distancing or stay-at-home orders); public reactions and societal impact (eg, preparedness, remote working, or college education); spatial scales (eg, local, state/provincial, national, or international); and social issues (eg, discrimination against certain countries, violence, uncivil language).
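The intercoder reliability check mentioned above (kappa of at least 0.8 before full-scale coding) can be computed per content feature. The sketch below assumes the irr package and two hypothetical coders' binary judgments on the same 20 posts; these values are illustrative, not the study's actual reliability data.

```r
library(irr)

# Hypothetical 0-1 codes (1 = feature present) from two coders for one content feature
coder_a <- c(1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0)
coder_b <- c(1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0)

k <- kappa2(cbind(coder_a, coder_b))  # Cohen's kappa for two raters
k$value                               # agreement beyond chance for this feature
k$value >= 0.8                        # deploy full coding only if the threshold is met
```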
Note that these content features were not mutually exclusive; a post could have multiple features under the same or different major categories at the same time, as long as the post contained the specific contents. For example, a single post could mention symptoms, diagnosis, risk factors, and clinical consequences. In addition, these content features were universally developed and objective; therefore, they could be applied in different sociocultural backgrounds without the translation that LIWC requires. The complete descriptions of these major categories and the further specific contents within each major category are provided in . After the comprehensive coding scheme was established and the list of 77 content features was defined, we coded the 1000 posts according to the coding scheme. For each post, the output was a 77-element 0-1 binary vector: a 1 indicated that the post mentioned the corresponding content feature, while a 0 indicated that it did not. In general, the more 1's (and hence the fewer 0's), the more diverse the topics contained in the post; fewer 1's indicated more focused topics. The final output of the analytical workflow was a 1000 × 77 binary matrix that could be further divided into two 500 × 77 binary matrices representing the viral Twitter and Sina Weibo groups, respectively.

Descriptive Analysis of Viral COVID-19 Posts Across Social Media Platforms

We applied descriptive analysis to quantify and contrast the prevalence of content features in the most viral COVID-19 posts on the social media platforms Weibo and Twitter. The prevalence of a content feature was defined as the proportion of 1's across all sampled posts for that feature; it was therefore bounded between 0 (ie, none of the sampled posts mentioned the content feature) and 1 (ie, all posts mentioned the content feature). A larger prevalence indicated that the corresponding content feature was more frequently mentioned in viral social media posts regarding COVID-19. We further applied a two-sample z test to investigate whether there were statistically significant differences in the prevalence of the same content feature between Weibo and Twitter. Because the data were 0-1 binary rather than continuous, the z test was more appropriate than the t test or the Kolmogorov-Smirnov test. The content features with the most distinct prevalence between the two social media platforms were identified based on the z test. In addition to comparing posts across platforms, we also studied the associations between content features within each platform. Pairwise Pearson correlations were calculated between each pair of content features in both the Twitter and the Sina Weibo posts, and pairs with statistically significant associations (P<.05) were identified. Together, these analyses provide a comprehensive characterization of how viral COVID-19 content features are distributed and correlated differently on the two major social media platforms.
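As an illustration of the prevalence measure and the two-sample z test described above, the sketch below assumes 500 viral posts per platform and uses the health agency feature as an example; the Twitter count corresponds to the 37.0% prevalence reported in the Results, while the Weibo count (about 12%) is inferred from the reported difference of 0.25, so both are illustrative rather than exact.

```r
# Prevalence of each feature is the column mean of a platform's 500 x 77 binary matrix,
# eg, prevalence_weibo <- colMeans(weibo_matrix)

n_twitter <- 500; x_twitter <- 185   # ~37.0% of viral tweets mention a health agency
n_weibo   <- 500; x_weibo   <- 60    # ~12% of viral Weibo posts (inferred, illustrative)

p1 <- x_twitter / n_twitter
p2 <- x_weibo / n_weibo

# Two-sample z test for the difference in proportions (pooled standard error)
p_pool  <- (x_twitter + x_weibo) / (n_twitter + n_weibo)
z       <- (p1 - p2) / sqrt(p_pool * (1 - p_pool) * (1 / n_twitter + 1 / n_weibo))
p_value <- 2 * pnorm(-abs(z))
c(difference = p1 - p2, z = z, p = p_value)

# Equivalent comparison via prop.test (chi-squared test without continuity correction)
prop.test(c(x_twitter, x_weibo), c(n_twitter, n_weibo), correct = FALSE)
```

Pairwise feature associations can be screened in the same spirit, for example with cor() on a platform's binary matrix and cor.test() on feature pairs of interest.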
Unsupervised Learning of Viral COVID-19 Posts Across Social Media Platforms

To further investigate the distributions of and relationships among multiple content features simultaneously, we applied the t-distributed stochastic neighbor embedding (t-SNE) technique, a machine learning dimension reduction algorithm. In contrast to the more commonly used principal component analysis technique, t-SNE can handle data that are not normally distributed, as in this study (ie, binary data), and it is also commonly used in other studies involving large and heterogeneous data (eg, bioinformatics data). Performing t-SNE provides a clear visualization of associations among content features in 2D space instead of the original, complex 77-dimensional feature space. The t-SNE dimension reduction paved the way for subsequent clustering analysis, for which we applied unsupervised machine learning k-means clustering. Note that we created the 6 major categories of content features for our own manual content coding effort; these categories were based on our observation and discussion of the COVID-19 pandemic and public discourse on social media. Data-driven clustering analysis (also known as unsupervised learning), on the other hand, enables the data to "speak for themselves" (hence, "unsupervised"). Data-driven clustering provides a new angle for identifying possible aggregations of content features; for example, frequently concurrent content features may not necessarily fall under the same manually created major category. k-means clustering does not require a priori information from researchers on how the features should be grouped and therefore reduces potential bias. The optimal k value for k-means clustering was determined by computing and inspecting the total within sum of squares (TWSS) over a wide range of k values from 1 to 20. Although larger k values are usually associated with smaller TWSS, they make the clusters more difficult to interpret. We examined and contrasted the clustering patterns of content features in the most viral COVID-19 posts on Twitter and Weibo. The complete workflow for extracting and analyzing viral COVID-19 posts on different social media platforms is conceptualized and presented in . All analytical codes were developed in R 4.0.2 (R Project) with the supporting packages Rtsne, tidyverse, cluster, factoextra, gridExtra, wordcloud, tm, corrplot, and ggplot2. The codes and data are freely available upon request.
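The dimension reduction and clustering steps can be sketched as follows with the Rtsne package listed above. The random binary matrix is a stand-in for a platform's 500 x 77 post-by-feature matrix, and running k-means on the 2D embedding (rather than on the raw binary matrix) is one reasonable reading of the workflow, not necessarily the exact published implementation.

```r
library(Rtsne)

set.seed(42)

# Stand-in for one platform's 500 x 77 binary post-by-feature matrix
feature_matrix <- matrix(rbinom(500 * 77, size = 1, prob = 0.05),
                         nrow = 500, ncol = 77)

# Embed the 500 posts from the 77-dimensional feature space into 2D
tsne_fit <- Rtsne(feature_matrix, dims = 2, perplexity = 30, check_duplicates = FALSE)

# Elbow inspection: total within sum of squares (TWSS) for k = 1 to 20
twss <- sapply(1:20, function(k) kmeans(tsne_fit$Y, centers = k, nstart = 25)$tot.withinss)
plot(1:20, twss, type = "b", xlab = "k", ylab = "Total within sum of squares")

# Final clustering with the chosen k (eg, k = 6 for Twitter in this study)
km <- kmeans(tsne_fit$Y, centers = 6, nstart = 25)
km$size  # cluster sizes
```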
Description of Viral COVID-19–Related Social Media Posts on Sina Weibo and Twitter

The most prevalent content features on Twitter (which has mostly Western users) were health agency (eg, CDC [US Centers for Disease Control and Prevention], NIH [National Institutes of Health], and WHO [World Health Organization]; 37.0%), violence (mostly related to domestic violence due to stay-at-home orders; 20.4%), international relationships (14.8%), misinformation (eg, mentioning misinformation, disinformation, hoax, or fake news; 11.2%), stay-at-home order (11.0%), and vaccine (10.8%). The 10 most frequently mentioned content features on Twitter, along with their prevalence and ranking, are shown in (top panel). In general, prevalent COVID-19 content features on Twitter did not focus directly on the disease itself and the epidemic but rather on policies, politics, and other secondary societal issues, such as violence and discrimination. This finding reinforces the notion that COVID-19, like many large pandemics and emerging health issues, is not an isolated medical issue and is intertwined with complicated sociopolitical aspects. In particular, 2020 was a US presidential election year; therefore, it was not surprising that US President Donald Trump and other former and current US office holders (eg, President Barack Obama, Vice President Joe Biden, Majority Leader Mitch McConnell, and House Speaker Nancy Pelosi) were frequently mentioned in COVID-19–related viral tweets. Given the partisan nature of the US political system, the Republican Party and the Democratic Party were also consistently mentioned with COVID-19, mostly in relation to their distinct views on and countermeasures for the pandemic. The most mentioned nonpolitician celebrity was Bill Gates, and mentions of his name were usually associated with the content features of vaccines and misinformation (mostly vaccine-related conspiracy theories). Discrimination toward Chinese people, Asian Americans, and Asian people in general was also frequently mentioned. Note that these were content features and may not reflect actual discrimination or negative sentiment toward these groups in the tweets; in fact, many viral tweets that mentioned discrimination features advocated for the elimination of discrimination and xenophobia. In comparison, the most prevalent content features on Weibo were research (18%), transmission (17.8%), cases (17%), healthcare personnel (15.8%), and testing (12.8%). The 10 most frequently mentioned content features on Weibo, along with their prevalence and ranking, are shown in (bottom panel). Compared with Twitter users, Weibo users (who are mostly Chinese) were more likely to engage in discussion of disease-related content features; among the 10 most common content features, only celebrity was not directly related to the disease itself. In other words, Chinese Weibo users tended to focus on COVID-19 as a health and medical issue rather than on the associated societal and political issues discussed in Western societies. Viral Weibo posts were much more likely to mention health personnel and pay tribute to health care workers. Research on the SARS-CoV-2 pathogen and its transmission in human populations was also frequently mentioned, demonstrating public interest in the state-of-the-art understanding of the emerging health crisis.
Because China experienced the original 2003 severe acute respiratory syndrome (SARS) outbreak, which was caused by severe acute respiratory syndrome coronavirus 1 (SARS-CoV-1), and COVID-19 is caused by a similar coronavirus (SARS-CoV-2), the history of the 2003 SARS outbreak was a recurrent theme in COVID-19 Weibo posts. The celebrities mentioned in posts related to COVID-19 on Weibo were also very different from those on Twitter. In general, viral Weibo posts mentioned pop culture idols (eg, singers, other performing artists, and sports stars), and the sentiment was almost always positive (eg, mentions of financial, resource, and emotional support for COVID-19–impacted regions and people provided by these celebrities). These results show vastly different content features covered in viral posts on Weibo and Twitter, reflecting the vast differences in the perception of COVID-19 between the two corresponding major sociocultural systems. In general, Twitter users (who mostly live in Western countries) were highly engaged in discussions of countermeasures, politics, and policies related to the COVID-19 pandemic. In comparison, Weibo users (mostly Chinese) tended to focus more on the disease itself, but not exclusively. Among the top 10 features, the only overlapping content feature between the two platforms was the local situation. Therefore, these findings reveal substantially different focuses on the COVID-19 pandemic in Chinese and Western societies, which were reflected in the most viral social media posts in cyberspace.

Comparative Analysis of Content Features of Twitter and Sina Weibo

We further provide a quantitative comparison of content features between the two social media platforms. Of the 77 content features, 3 (4%) were absent from all of the 500 most viral tweets (comorbidity, eHealth, and suicide), and 6 (8%) were not present in any of the 500 most viral Weibo posts (constitution, curfew, remote working, major religion, discrimination against gender, and discrimination against religion). This result also implies that viral discussions of COVID-19 on Weibo had narrower but more focused content features. There was no intersection of missing features between the two major social media platforms. Two-sample z tests were used to further quantify between-platform differences for each content feature. Content features with zero prevalence (ie, never mentioned in viral social media posts on either platform) were removed to perform the z test correctly. The features with the most distinct prevalence between the two platforms were health agency (difference of prevalence [D]=0.25; Twitter minus Weibo; P<.001), vaccine (D=–0.17, P<.001), shelter-in-place (or lockdown; D=–0.11, P<.001), cases (D=0.09, P<.001), and stay-at-home order (D=0.10, P=.002). While many of these content features were among the top 20 mentioned on both social media platforms, we also observed that the local situation, the only content feature in the top 10 on both platforms, had a statistically significant difference (D=–0.11, P<.001); local was the 6th most mentioned content feature on Weibo and the 10th on Twitter. These quantitative findings can be explained by the different sociocultural backgrounds of the users of Twitter (Western) and Weibo (Chinese). Some features were also distributed similarly between the two social media platforms (ie, P values substantially greater than .05 based on the z test).
Of these, preparedness (D<0.01, P=.90), discrimination against ethnicity (D<0.01, P=.96), prevention (D<0.01, P=.97), recovery (D<0.01, P=.97), ecosystem (D<0.01, P=.97), masks (D<0.01, P>.99), and Trump (D<0.01, P>.99) were the least distinct features. These features represent the common ground regarding COVID-19 between the two social media platforms and the two underlying sociocultural systems. The missing content features revealed a discrepancy between viral and nonviral discussions of COVID-19 on social media. As mentioned earlier, the comprehensive content feature coding scheme was originally developed from a random sample of posts, most of which were nonviral posts with <5 reposts. We speculate that certain controversial content features (especially those related to policy and politics on Twitter) facilitated the spread of certain posts on social media and caused them to go viral, whereas less controversial posts typically do not gain much attention and do not go viral. However, we must point out that content features are only one reason that a post goes viral; other factors include temporality (ie, when the post was published relative to the epidemic), properties of the original posting user (eg, number of followers), and the severity of the pandemic at that time and place. Significant Pearson correlations (P<.05) are shown in for Twitter (left) and Weibo (right) posts, respectively. In general, significantly correlated content feature pairs were more abundant on Weibo than on Twitter. One possible explanation is that Twitter has a 280-character length limit for posts; therefore, the content features in each tweet were limited, and concurrent content features in the same tweet were less frequent. Sina Weibo, on the other hand, allows up to 2000 characters, so a Weibo post can include much more content, and hence accommodate more content features, than a tweet. Viral COVID-19 tweets included an average of 2.37 content features, while viral Weibo posts contained an average of 2.78 content features. However, most viral Weibo posts used URLs to pack in more information and keep the post concise rather than including everything in the main post content; therefore, the 2000-character limit is only a theoretical upper limit and was rarely reached, especially for viral Weibo posts. Note that Weibo is subject to censorship of certain content features. For example, although US President Trump was mentioned quite a few times in viral Weibo posts, President Xi of China is not an allowed topic on Weibo and in Chinese cyberspace in general; therefore, there was no content feature on Weibo equivalent to Trump. Other political figures in China, such as the Hubei Party secretary (Ying Yong), are generally permitted by censors to be mentioned and commented on in Weibo posts.

Dimension Reduction and Clustering Analysis of Content Features

The machine learning dimension reduction t-SNE results for Twitter and Weibo are shown in . These figures show how content features are distributed and associated in the reduced 2D space instead of the original, complex 77-dimensional feature space. The content features clearly have distinct distribution patterns between the two social media platforms in the reduced 2D space, which reinforces our previous findings on the variability of content features across the sociocultural spectrum.
The optimal number of clusters on Twitter (k t) was determined to be 6 from (left), while the optimal number of clusters on Weibo (k s) was found to be 5 from (right). Therefore, not only were content features regarding COVID-19 distributed differently between the two social media platforms, but their associations (eg, clusters) within posts were also distinct between the two platforms. Note that these clusters were identified by the data-driven unsupervised machine learning technique, and they did not necessarily align with the 6 manually developed major categories. We further show the k-means clustering results of the content features on Twitter and Weibo in (left and right, respectively). The clustering patterns were substantially different between the two social media platforms. The sizes of the 6 distinct clusters on Twitter were 154, 107, 96, 62, 42, and 39. The total sum of squares (TSS) across all 6 clusters was 1402, the total within-cluster sum of squares (TWSS) was 1079, and the total between-cluster sum of squares (TBSS) was 323 on Twitter; note that TSS = TWSS + TBSS. In comparison, the 5 cluster sizes of the Weibo posts were 218, 106, 81, 67, and 28, and the TSS, TWSS, and TBSS on Weibo were 1262, 1034, and 228, respectively. Therefore, all sums of squares were much smaller on Weibo than on Twitter. In addition, the two dimensions (the x- and y-axes in ) were also much smaller on Twitter (3.2% and 3%) than on Weibo (4.5% and 4%). All these results reveal that COVID-19 content features in viral Weibo posts were more similar across different posts than those in Twitter posts; Twitter showed a more diverse array of content features among different tweets.
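The cluster sizes and sums of squares reported above can be read directly off a fitted stats::kmeans object; the following continues the hedged Methods sketch (any kmeans fit named km behaves the same way), and the numeric comments simply restate the reported Twitter values.

```r
km$size           # cluster sizes (the study reports 154, 107, 96, 62, 42, and 39 for Twitter)
km$totss          # total sum of squares (TSS)
km$tot.withinss   # total within-cluster sum of squares (TWSS)
km$betweenss      # total between-cluster sum of squares (TBSS)

# The decomposition TSS = TWSS + TBSS holds for any kmeans fit;
# for the reported Twitter clustering, 1079 + 323 = 1402
all.equal(km$totss, km$tot.withinss + km$betweenss)
```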
Theoretical Innovation

This study is the first of its kind to comprehensively characterize the content features of discussions regarding a large pandemic on social media across the sociocultural spectrum. We showed the vast differences in the topic content features of viral social media posts on Twitter and Weibo, the two most influential social media platforms in the West and China during the COVID-19 pandemic. In general, viral social media posts in China focused on cases and prevention, topics that relate to COVID-19 as a health issue. By comparison, most viral tweets regarding COVID-19 focused more on policies and politics, including stay-at-home orders, President Trump, and other political figures. Through various analytical methods, social media data provide a new angle from which to explore and understand public discourse on the COVID-19 pandemic and the associated social, political, and economic issues. Details of these discussions in virtual cyberspace may provide insights into the actual disease epidemic in the real world. For example, analyzing public perception of various nonpharmaceutical interventions (NPIs), such as social distancing, mandatory mask-wearing, and stay-at-home orders, would provide an estimate of compliance with these NPIs, which determines the case counts and epidemic trajectory in a region. This concept echoes the original idea of infodemiology, which uses a time series of social media post counts related to a health issue (eg, COVID-19) as an indicator of actual case counts. Beyond counting social media posts, we will be able to extract fine-grained perceptions of the risk of COVID-19 and of NPIs, and thereby extend the application of infodemiology.

Technical Advances

To achieve an effective comparison across the sociocultural spectrum regarding the COVID-19 pandemic on social media, we developed a comprehensive content analytical workflow. This workflow was specifically designed for transboundary infectious diseases (eg, outbreaks and pandemics of infectious diseases) that have complicated sociocultural contexts. Compared with the commonly used LIWC, our workflow, especially its coding scheme, has several advantages. First, LIWC is a general content analytical tool that ignores many content features that were important during the COVID-19 pandemic, whereas our coding scheme is tailored to the complicated, interacting health, social, cultural, and political nuances of transboundary infectious diseases; it is therefore able to capture a much more comprehensive and detailed set of content features in web-based discussions of such diseases. Second, LIWC uses proprietary algorithms to calculate individual scores for different features, and the exact interpretation of the numeric values is not readily comprehensible. In contrast, our coding scheme is 0-1 binary, where 1 indicates that the content has a feature and 0 indicates that it does not, which is clearer than the more obscure LIWC scores. In addition, LIWC scores vary substantially (from 0 to 100) among different features; features with large LIWC values tend to dominate and overshadow other features, making further analysis prone to bias. Our coding scheme is consistent, as all features are coded in the same way. Finally, LIWC is difficult to apply directly to non–Indo-European languages; therefore, direct comparison between sociocultural contexts with LIWC is almost impossible.
In contrast, our coding scheme is context-free and can be applied to virtually any language and any region. The coding scheme itself is also flexible: researchers can easily add, remove, or revise content features as necessary when working with health issues beyond COVID-19, and the scheme can be retrofitted to understand communications about previous events (eg, the 2016 Zika event).

Limitations of the Current Study and Future Directions

This study adopts a static view of all viral social media posts for comparative analysis between two sides of the sociocultural spectrum in a given period of time. However, for a large and ongoing pandemic, time is another major influential factor that is associated with the actual progress of the pandemic. Our previous studies showed that the Zika case series was strongly associated with the Zika discussion trend on Twitter in 2016. Similarly, future studies can be expanded to explicitly characterize how various content features evolve over time in different regions. The ongoing COVID-19 case series may be predictable from certain content features (eg, those regarding NPIs), similar to the previously discussed infodemiology approach. We used the number of reposts (ie, retweets or shares) to define a viral social media post. One limitation is that we did not consider the possibility of automatic reposting by bots or cyborgs; therefore, a large number of reposts may not accurately represent and reflect public perception of an issue. Bots and cyborgs, however, are not necessarily associated with misinformation; they can also be used to quickly disseminate information on social media for other reasons, such as advertising. A future direction of this study is to examine other definitions of viral posts (eg, posts with a large number of likes, favorites, or comments). Viral social media posts are only one of many attributes of social media discussion. Our initial assessment showed that >75% of tweets and >80% of Weibo posts regarding COVID-19 did not receive any attention on social media; this is similar to our previous finding that 76% of all Zika-related tweets were never retweeted. To characterize web-based public discourse related to COVID-19 and other emerging health issues accurately and comprehensively, we will continue studying these nonviral social media posts on different platforms. However, given the ever-increasing volume of social media posts, effective sampling strategies are a priority, as they are necessary to provide a less biased depiction of content features. Data mining of nonviral posts regarding COVID-19, especially of sentiment toward NPIs, will provide a more accurate estimation of compliance with NPIs in different regions at different stages of the pandemic. We will also be able to further compare and contrast how the distributions of content features differ between viral and nonviral post groups as well as across the sociocultural spectrum. In this study, we depict how NPIs for COVID-19 have been mentioned on social media across the sociocultural spectrum. Because this study focuses on providing a neutral and objective characterization of content features in COVID-19–related discussions, it does not consider subjective sentiment toward specific NPIs.
However, individual and societal perceptions of NPIs can be strong influencing factors during the COVID-19 pandemic. For instance, positive sentiment toward mask-wearing and social distancing may reflect actual compliance with these NPIs in society and hence help reduce the risk of transmission. On the other hand, negative sentiment toward these NPIs may lead to noncompliance and facilitate COVID-19 transmission in the real world. In a future study, we will integrate the objective content features with the corresponding sentiment and/or emotion to provide a more comprehensive understanding of public perceptions. Finally, this study relies on human coding of content features, which is substantially labor-intensive; for instance, adequate and proper training is required to achieve high intercoder reliability before each coder can work independently. In comparison, the LIWC algorithm is automated and relatively easy to use. We are still at an early stage of developing a novel analytical workflow comparable to LIWC, and we expect to develop at least a semiautomated, semisupervised machine learning method for quick and effective web-based health information processing and annotation. To achieve this ambitious goal, we envision a crowd-sourcing approach that will enable citizen scientists and volunteers worldwide to help manually code more social media posts, create an even larger corpus, and develop state-of-the-art semisupervised or supervised machine learning pipelines to automate the process. The eventual product will be able to automatically extract content features from social media posts regarding health issues and can further guide effective health communication during emergencies.
This study is the first of its kind to comprehensively characterize the content features of discussions regarding a large pandemic on social media across the sociocultural spectrum. We showed the vast differences in topic content features of viral social media posts in Twitter and Weibo, the two most influential social media platforms in China and the West during the COVID-19 pandemic. In general, viral social media posts in China focused on cases and prevention, which are topics that are more related to COVID-19 as a health issue. However, as a comparison, most viral tweets regarding COVID-19 focused more on policies and politics, including stay-at-home orders, President Trump , and other political figures . Through various analytical methods, social media data provided a new angle to explore and understand public discourse of the COVID-19 pandemic and associated social, political, and economic issues. Details of these discussions in virtual cyberspace may provide insights on the actual disease epidemic in the real world. For example, analyzing public perception of various NPIs, such as social distancing , mandatory mask-wearing , and stay-at-home orders, would provide an estimation of the compliance with these NPIs, which determine the case counts and epidemic trajectory in a region. This concept echoes the original idea of infodemiology, which uses a time series of social media post counts related to a health issue (eg, COVID-19) as an indicator of actual case counts . In addition to the number of social media posts, we will be able to further extract fine-grained perceptions of the risk and NPIs of COVID-19 and extend the application of infodemiology.
To achieve an effective comparison across the sociocultural spectrum regarding the COVID-19 pandemic on social media, we developed a comprehensive content analytical workflow. This analytical workflow was specifically designed for transboundary infectious diseases (eg, outbreaks and pandemics of infectious diseases) that have complicated sociocultural contexts. Compared to the commonly used LIWC , our workflow, especially the coding scheme, has several advantages. First, LIWC is a general content analytical tool that ignores many important content features during the COVID-19 pandemic. Our coding scheme is tailored to the complicated and interacting health, social, cultural, and political nuances of transboundary infectious diseases. Therefore, our coding scheme is able to capture a much more comprehensive and detailed content features in web-based discussions regarding transboundary infectious diseases. Second, LIWC uses proprietary algorithms to calculate individual scores of different features, and the exact interpretation of the numeric values is not readily comprehensible. In contrast, our coding scheme is 0-1 binary, where 1 indicates that the content has a feature and 0 indicates that it does not. This coding is clearer than the obscure LIWC scores. In addition, LIWC scores vary substantially (from 0 to 100) among different features. Certain features that have large values in LIWC tend to dominate and overshadow other features; thus, further analysis is prone to bias. Our coding scheme is consistent, as all features have the same coding scheme. Finally, LIWC is difficult to directly apply to non–Indo-European languages; therefore, direct comparison between sociocultural contexts with LIWC is almost impossible. In contrast, our coding scheme is context-free and can be applied to virtually any language and any region. The coding scheme itself is also flexible. Researchers can easily add and modify content features as necessary when working with other health issues beyond COVID-19. The coding scheme can be retrofitted to understand communications on previous events (eg, the 2016 Zika event). We can easily add, remove, or revise corresponding content features related to the specific health issues we are exploring.
This study adopts a static view of all viral social media posts for comparative analysis between two sides of the sociocultural spectrum in a given period of time. However, for a large and ongoing pandemic, time is another major influential factor that is associated with the actual progress of the pandemic. Our previous studies showed that the Zika case series was strongly associated with the Zika discussion trend on Twitter in 2016. Similarly, future studies can be expanded to explicitly characterize how various content features evolve with time in different regions. The ongoing COVID-19 pandemic case series can be predicted by certain content features (eg, regarding NPIs), similar to the previously discussed infodemiology approach. We used the number of reposts (ie, retweets or shares) as the definition of a viral social media post. One limitation is that we did not consider the possibility of automatic reposting by bots or cyborgs. Therefore, it is possible that a large number of reposts may not accurately reflect public perception of an issue. Bots and cyborgs, however, are not necessarily associated with misinformation. Bots and cyborgs can be used as tools to quickly disseminate information on social media platforms for other reasons, such as advertising. A future direction of this study is to identify other definitions of viral posts (eg, posts with a large number of likes, favorites, or comments). Viral social media posts are only one of many attributes of social media discussion. Our initial assessment showed that >75% of tweets and >80% of Weibo posts regarding COVID-19 did not receive any attention on social media. This number is similar to our previous finding that 76% of all Zika-related tweets were never retweeted. To characterize web-based public discourse related to COVID-19 and other emerging health issues accurately and comprehensively, we will continue studying these nonviral social media posts on different platforms. However, given the ever-increasing volume of social media posts, effective sampling strategies are a priority. Effective sampling is a necessity to provide a less biased depiction of content features. Data mining of nonviral posts regarding COVID-19, especially on sentiment toward NPIs, will provide a more accurate estimation of compliance with NPIs in different regions at different stages of the pandemic. We will also be able to further compare and contrast how the distributions of content features differ between viral and nonviral post groups as well as across the sociocultural spectrum. In this study, we depict how NPIs of COVID-19 have been mentioned on social media across the sociocultural spectrum. Because this study focuses on providing a neutral and objective characterization of content features in COVID-19–related discussions, it does not consider subjective sentiment toward specific NPIs. However, individual and societal perceptions toward NPIs can be strong influencing factors during the COVID-19 pandemic. For instance, positive sentiment toward mask-wearing and social distancing may reflect actual compliance with these NPIs in society and hence help reduce the risk of transmission. On the other hand, negative sentiment toward these NPIs may lead to noncompliance and facilitate COVID-19 transmission in the real world. In a future study, we will further integrate objective content features and corresponding sentiment and/or emotion to provide a more comprehensive understanding of public perceptions.
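As an illustration of the repost-based definition of virality discussed above, the snippet below flags posts whose repost counts exceed a chosen cutoff and reports the share of posts that were never reposted; the data and the cutoff of 1,000 reposts are hypothetical, chosen only for demonstration.

```python
import pandas as pd

# Hypothetical post-level data: repost counts for a handful of COVID-19-related posts.
posts = pd.DataFrame({
    "post_id": range(1, 9),
    "repost_count": [0, 0, 3, 12, 0, 250, 1800, 5400],
})

VIRAL_THRESHOLD = 1000  # hypothetical cutoff for calling a post "viral"

posts["is_viral"] = posts["repost_count"] >= VIRAL_THRESHOLD
never_reposted = (posts["repost_count"] == 0).mean()

print(posts[posts["is_viral"]])
print(f"Share of posts never reposted: {never_reposted:.0%}")
```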
Finally, this study relies on human coding of content features, which is substantially labor-intensive. For instance, adequate training is required to achieve high intercoder reliability before each coder can work independently. In comparison, the LIWC algorithm is automated and relatively easy to use. We are still at the early development stage of a novel analytical workflow comparable to LIWC. We expect to develop at least a semiautomated, semisupervised machine learning method for quick and effective web-based health information processing and annotation. To achieve this ambitious goal, we envision a crowd-sourcing approach that will enable ardent citizen scientists and volunteers worldwide to manually code more social media posts, create an even larger corpus, and help develop state-of-the-art semisupervised or supervised machine learning pipelines to automate the process. The eventual product will be able to automatically extract content features from social media posts regarding health issues and can further guide effective health communication during emergencies.
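A minimal sketch of the kind of supervised pipeline envisioned above, trained on human-coded posts to predict a single binary content feature, might look as follows; the tiny training set, the feature name, and the model choice are hypothetical and serve only to illustrate the idea of automating the annotation step.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-coded examples for one binary feature ("policy": 1 = present, 0 = absent).
texts = [
    "Governor extends the stay-at-home order through May",
    "New testing site opens downtown, no appointment needed",
    "Senate debates the next relief bill for small businesses",
    "Wash your hands and avoid touching your face",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["City council votes on a mask mandate tonight"]))
```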
|
Opportunities and challenges in early diagnosis of rheumatoid arthritis in general practice | e4b42a53-aa67-4c1b-966f-238f3dfbb8d5 | 10049595 | Family Medicine[mh] | Prompt diagnosis of rheumatoid arthritis (RA), the most common form of inflammatory arthritis, is crucial to optimise long-term patient outcomes through prevention of joint damage and disability. However, early disease can be challenging to identify in primary care, especially given that RA makes up a small proportion of the musculoskeletal conditions that account for one in seven GP appointments. Patients consult with GPs a mean of four times before being referred to rheumatology services. The non-specific nature of symptoms at the earliest stages of RA is a barrier to GPs identifying patients with newly presenting RA.
The rationale for identifying early disease is to initiate treatment using disease-modifying therapies (including biologics) in the reversible stage of the disease, referred to as the RA ‘therapeutic window of opportunity’, within the 3 months following the onset of clinical synovitis. This can significantly improve clinical outcomes and health-related quality of life, with earlier disease control reducing work-related disability. However, the discovery that circulating autoantibodies, including anticitrullinated protein antibodies (ACPA), precede the clinical onset of disease provides an opportunity to identify people with musculoskeletal symptoms who are at risk of developing RA. ACPA can be identified through an anti-cyclic citrullinated peptide (anti-CCP) test. A high positive anti-CCP result is more specific for joint pathology than rheumatoid factor, and is strongly associated with the development of RA. The international rheumatology community has adopted the term ‘pre-RA’ to retrospectively describe a phase that an individual has progressed through once it is known that they have developed RA. It is during this period that patients may present in primary care with non-specific musculoskeletal symptoms. Secondary care models in autoantibody-positive patients have evolved to predict the early development of RA before synovitis is clinically apparent. However, the applicability of these models to primary care is unknown: non-specific musculoskeletal symptoms are common in the community, and RA-related autoantibodies (ACPA) identified in people with non-specific musculoskeletal symptoms may differ importantly in natural history and prognosis from disease that presents with clinical synovitis. New research from the Leeds anti-CCP cohort, analysing 6,780 patients from 312 general practices throughout England, demonstrated that individuals with high anti-CCP levels and joint pain in their hands/feet (without synovitis) have an increased likelihood of developing RA, compared with those with low anti-CCP levels. Targeted anti-CCP testing in general practice could identify people at high risk of developing RA, enabling referral to rheumatology services prior to the development of synovitis to facilitate monitoring, diagnosis, and rapid initiation of treatment.
Identifying pre-RA is a ‘needle in a haystack’ problem in primary care, given the myriad of musculoskeletal presentations. Changing the diagnostic paradigm of RA to detection prior to the onset of classical clinical synovitis requires robust evidence regarding the appropriate selection of patients ‘at risk’ of RA in primary care, and that targeted anti-CCP testing results in overall benefit, minimises harms, and is cost-effective. Research is underway to develop criteria to identify people presenting to primary care with new-onset musculoskeletal symptoms who are likely to be anti-CCP positive. Economic modelling is also exploring the cost-effectiveness of such testing, considering the workload implications within general practice and rheumatology services, the resources needed to support interpretation of test results, and the pathology costs of widespread anti-CCP testing.
Even if primary care prediction models perform adequately, evidence is required regarding the clinical- and cost-effectiveness and safety of ‘pre-RA’ intervention. The benefits of treating pre-RA may include reducing the risk of clinical outcomes associated with comorbidities, such as cardiovascular disease-related mortality in RA (relative risk 1.48 [95% confidence interval = 1.36 to 1.62]). New evidence is emerging to support an earlier therapeutic window, with disease-modifying treatments to halt the biological processes and prevent the onset of RA being tested within clinical trials. There are, however, substantial adverse effects of disease-modifying therapies, and it should not be assumed that evidence on the balance of benefits and harms found for patients with RA diagnosed following presentation with typical symptoms is generalisable to the pre-RA population. ‘Pre-RA’ must, therefore, be recognised as a different entity from RA. Potential harms of a strategy that will label patients as having pre-RA must be considered, such as increased anxiety, reluctance to undertake usual levels of activity due to perceived disability, or wider social implications such as increased costs of insurance policies or restriction of occupational opportunities. The scale of such harms will depend on the extent of overdiagnosis that can be expected, that is, the proportion of individuals labelled at risk who would not have gone on to develop RA. While we understand the clinical risk factors for RA development in the at-risk population, there is still potential for a high rate of false-positive anti-CCP tests, and it is not yet understood how frequently we should monitor these clinical risk factors. The optimal primary and secondary care service models to monitor and support patients, and the associated workload and resource implications, also require further research. Potentially modifiable lifestyle risk factors such as raised body mass index and smoking are strongly associated with the development of RA. Our recent systematic review highlighted that individuals at risk of RA have a need for more knowledge about RA and their potentially modifiable risk factors, which in turn could support their engagement with preventive interventions. However, as yet there is no clear indication that modifying these lifestyle risk factors will prevent or delay the onset of disease. Further evidence is also needed to determine if disease-modifying therapies can prevent or delay the onset of RA. Accordingly, our team are currently recruiting people with musculoskeletal symptoms who have tested positive for anti-CCP antibodies and who are at moderate or high risk of developing RA according to a risk-stratification prediction model (antibody concentration more than three times the upper limit of normal, plus hands/feet tenderness and/or ≥30 minutes of early-morning stiffness) into a therapeutic intervention study (a 48-week, 2 mg daily oral dose of baricitinib) to determine whether it reduces the incidence of RA.
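The eligibility criteria quoted above can be read as a simple decision rule. The sketch below encodes one possible interpretation of that rule and is purely illustrative: the actual risk-stratification prediction model used by the study team is more nuanced than this, and the function name, inputs, and example values are hypothetical.

```python
def at_risk_of_ra(anti_ccp: float, upper_limit_normal: float,
                  hands_feet_tenderness: bool, morning_stiffness_minutes: float) -> bool:
    """One possible reading of the criteria quoted in the text: anti-CCP more than
    three times the upper limit of normal, plus hands/feet tenderness and/or
    >=30 minutes of early-morning stiffness."""
    high_antibody = anti_ccp > 3 * upper_limit_normal
    symptoms = hands_feet_tenderness or morning_stiffness_minutes >= 30
    return high_antibody and symptoms

# Hypothetical example: anti-CCP of 35 U/mL with an upper limit of normal of 7 U/mL,
# no tenderness, but 45 minutes of early-morning stiffness.
print(at_risk_of_ra(35, 7, hands_feet_tenderness=False, morning_stiffness_minutes=45))  # True
```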
Non-specific musculoskeletal symptoms constitute a large proportion of all consultations in primary care. Testing some of these patients using anti-CCP may provide a means to identify those at risk of RA and potentially delay or prevent its onset. Before these potential benefits can be adequately realised, further research is required to evaluate and mitigate countervailing harms and costs of such a strategy, and to understand how widespread testing can be integrated into routine primary care in a way that is acceptable to GPs and patients.
|
The Double-Edged Sword of Iranian Social Media Against COVID-19 | 5f001ce3-eb0c-4a97-bb96-7f4fac030c43 | 7264456 | Health Communication[mh] | Since COVID-19 is a pandemic and its prevalence is increasing in Iran, managing the cyberspace has been regarded as a major challenge. The cyberspace looks like a double-edged sword with both positive and negative effects. Hence, national and local governments are required to monitor the cyberspace and determine the boundaries of social responsibility and professional-ethical framework of the media during crises and disasters to prevent the spread of any chaos and false information in the cyberspace. Additionally, social media and health planners and policymakers should collaborate to create realistic and scientific information in the cyberspace. Proper management of the cyberspace makes the public feel relaxed, reassures people to give a better response to COVID-19, and helps return the conditions to the pre-pandemic status as quickly as possible.
|
Microbial and organic manure fertilization alters rhizosphere bacteria and carotenoids of | 96446929-db5f-4724-b0b8-7a06ed3da78f | 11562559 | Microbiology[mh] | Soil health is crucial for plant growth because it provides a continuous living ecosystem and affects crop yield . Fertilizers are used to improve the soil environment. However, long-term chemical use usually damages the soil environment and disrupts ecosystems . For example, probiotics in the soil can be dramatically changed by chemical fertilizers . Recently, microbial fertilizers and organic manure have been developed to address these problems . The combined use of microbial fertilizers and organic manure leads to comprehensive changes in the soil ecosystem, contributing to high crop yields . The Strongreen and Yumeiren used in this study contained various strains of microorganisms and fermented organic fertilizers developed by Jiangmen Jieshi Plant Nutrition Co., Ltd. and Guangxi Yuanrun Agricultural Development Co., Ltd., respectively. They can improve soil health by stimulating probiotic growth and depressing the growth of pathogenic bacteria . Although the beneficial effects of the combined use of Strongreen and Yumeiren on soil have been determined, more details regarding its mechanism are needed to guide its practical application. The fruit color of Citrus affects the price and market competitiveness . Thus, the cause of different fruit colors has received widespread attention. The fruits have vivid colors that are mainly determined by anthocyanins and carotenoids. Anthocyanins are a class of water-soluble flavonoids that are widely distributed in vegetables and fruits . Citrus plants are a rich source of carotenoids . Carotenoids constitute a large family of compounds produced via comprehensive pathways. In addition, carotenoid concentrations are highly diverse among citrus varieties . For example, β-cryptoxanthin is enriched in Satsuma mandarin ( Citrus unshiu Marc.) while violaxanthin isomers are enriched in Valencia orange ( Citrus sinensis Osbeck) . These different carotenoid components in fruits affect the nutritional composition of the juice sacs. Therefore, the carotenoid component is important for fruit color and juice sac quality. Recent studies have shown that the rhizosphere bacterial community contributes to plant health and affects plant growth. Network interactions of the rhizosphere bacterial community can induce carotenoid production in plants. For Citrus planting, a combination of microbial fertilizer and organic manure fertilization has been widely used in recent years . This combined fertilization increases fruit yield and optimizes fruit color. Unfortunately, we still lack information about the interaction between rhizosphere bacterial community and carotenoid accumulation driven by microbial fertilizer and organic manure combined fertilization. Strongreen and Yumeiren are the two fertilizers that have been widely used in South China to promote productive traits resulting in higher yields and better quality of fruits for pitaya ( Hylocereus undatus ), banana ( Musa acuminata ), and ‘Orah’ . Based on previous planting knowledge, we presumed that the combined fertilization of Strongreen and Yumeiren might affect the rhizosphere bacterial community and lead to the accumulation of several carotenoids. In this study, we compared the fruit characteristics of Orah plants with and without combined fertilization of Strongreen and Yumeiren. 
The rhizosphere bacterial community and carotenoids in the two groups were determined by 16 S rRNA gene sequencing and ultra-performance liquid chromatography tandem mass spectrometry (UPLC-MS/MS) analysis, respectively. Using bioinformatics analysis, the differences in diversity and taxa of the rhizosphere bacterial community, and carotenoid components, were identified. Finally, rhizosphere bacterial community–carotenoid pairs with high correlations were analyzed, and an interaction network was constructed. The details of the effects of combined fertilization on rhizosphere bacterial community and carotenoids in ‘Orah’ has not been unveiled yet. This study focuses on the impact of fertilization on soil microbial communities and plant carotenoid components, providing key evidence for precise combined fertilization. Field experiments and sampling The studied ‘Orah’ in this study were growing in a local orchard in Wuming town from Nanning, Guangxi, China. The plants were grown from 2017 to 2022 and grown at the same management conditions as described previously . The longitude and latitude positions of experimental place were 23°15′58″ and 108°17′23″E, respectively. The annual average temperature was 22.3 °C, the average temperature of the coldest month was 13.2 °C, the average temperature of the hottest month was 28.8 °C, the average annual precipitation was 1272.3 mm, and the average annual sunshine duration was 1509.3 h. Lighting is natural light. All tested plants assigned to the groups were randomly selected. In this study, we used Strongreen and Yumeiren, to treat the Citrus reticulata Blanco ‘Orah’. Strongreen contains biological fulvic acid (BFA), organic matter, humic acid, manganese, boron, zinc (3–5%), and potassium oxide (≥ 6%) (Jiangmen Just Agrotech Co., LTD, China). Yumeiren contains organic fertilizers used for the enzymatic hydrolysis of fish proteins and peanut bran (Guangxi Yuanrun Agricultural Development Co., Ltd., China). The 60 trees were randomly divided into two groups: WYT and WYCK groups. The WYT group was sprayed with 50 ml Strongreen and 250 g Yumeiren (diluted in 10 kg ddH 2 O) five times every month from July to November in 2022 to enhance fruit enlargement and improve quality. The WYCK group was sprayed any fertilizer but 10 kg ddH 2 O as same as the WYT group. The 10 kg diluted fertilizer or ddH 2 O were sprayed for the trees by digging a circular trench around the drip line of the tree and applying the fertilizer in this trench. Sampling and measurements were double-blind . For each group, 100 g of rhizosphere soil samples from five trees with three replicates were collected using a sampling shovel for 16 S rRNA gene sequencing. Briefly, sterilize shovels were used to sample the rhizosphere soil. After removing impurities such as plant roots, stones, etc., from the soil, a portion of the soil sample was preserved in an insulated box containing ice packs and brought back to the laboratory, where it was immediately stored in a -80 °C freezer for microbial analysis. For the fruit measurement, 10 fruits were collected from 10 trees from each group. For the analysis of carotenoids, 3 fruit replicates from 3 trees were obtained in January 2023 for the UPLC-MS/MS analysis. Measurements of fruit The weight, cross diameter, and longitudinal diameter of the ‘Orah’ fruit were measured following the standard method as previously described on ripening stage. 
The color of the ‘Orah’ fruit was determined by UltraScan Pro (Hunter Lab, Reston, VA, USA) under room temperature per the L, a, b standard color. Citrus color index (CCI) was used to evaluate fruit color. The CCI was calculated as CCI = (a × 100)/L × b. DNA extraction and sequencing For 16s rRNA sequencing, DNA from the rhizosphere soil samples (5 g for each sample) was extracted using the TIANamp Soil DNA Kit (TIANGEN, China) according to the manufacturer’s protocols. DNA was assayed using a 1% agarose gel and a NanoDrop 2000 spectrophotometer (NanoDrop, USA) to confirm its quality and calculate its concentration. Sequence fragments of bacterial DNA from the 16 S rRNA genes were amplified using polymerase chain reaction (PCR). The primers were 341 F: 5′- CCTAYGGGRBGCASCAG-3′ and 806R: 5′-GGACTACNNGGGTATCTAAT-3′ as reported before . Three biological replicates were used for each treatment group. The PCR reaction condition was: 98 °C for 30 s, followed by 25 cycles (98 °C for 10 s, 55 °C for 30 s, and 72 °C for 30 s), and a final extension at 72 °C for 5 min. After PCR, the products were isolated by 2% agarose gel electrophoresis and Agencourt Ampure XP beads (Beckman, USA), following the PicoGreen dsDNA quantitation assay (Thermo Fisher, USA). The sequencing libraries were constructed by a TruSeq® DNA PCR-Free Sample Preparation Kit (Illumina, USA), determined by an Agilent Bioanalyzer 2100 (Agilent, USA), and sequenced using an Illumina NovoSeq 6000 platform (Illumina, USA). All the raw sequencing data have been deposited in the NCBI SRA database under the BioProject number PRJNA1090555 ( https://www.ncbi.nlm.nih.gov/bioproject/?term=PRJNA1090555 ). Bioinformatic analysis of soil bacterial communities The generated raw reads were filtered to eliminate adaptors, primers, low-quality sequences, nonbacterial ribosome sequences, and chimeras. The sequences were then assembled by FLASH v1.2.11 following clustered into operational taxonomic units by CD-HIT algorithm using the UCLUST program (USEARCH V11, https://www.drive5.com/usearch/ ). Diversity comparisons were analyzed for alpha and beta diversity using QIIME2 plugins . The Bray-Curtis distances and square-root transformed abundance data were analyzed and visualized by the “phyloseq,” “dplyr”, and “ggplot2” R packages. Analysis of carotenoids in ‘Orah’ Carotenoid content was analyzed using UPLC-MS/MS (Wuhan MetWare Biotechnology Co., Ltd., Wuhan, China). The fruit skin and pulp from WYCK and WYT groups were first crushed into powder by an MM 400 Grinding Mill (Retsch, Germany) at 4 °C. The analyzed groups were named WYCKS (fruit skin of the WYCK group), WYCKP (fruit pulp of the WYCK group), WYTS (fruit skin of the WYT group), and WYTP (pulp of the WYT group). Then, 50 mg samples were dissolved in 0.6 ml extraction buffer (n-hexane/acetone/ethanol with a volume of 1:1:2 containing 0.01% BHT). After adding the standard substance and treated for 20 min at room temperature, the samples were isolated by 12,000 r/min centrifugation for 5 min at 4 °C twice. Samples were dissolved in a methanol/methyl tert-butyl ether (1:1) mixture. By filtration into 0.22 μm pore size, the samples were analyzed by a UPLC-MS/MS system. The system included an ExionLC™ AD ( https://sciex.com.cn/ ) and a QTRAP® 6500+ ( https://sciex.com.cn/ ) using the ESI mode. 
The condition for ExionLC™ AD was as follows: chromatographic column, YMC C30 (3 μm, 100 mm × 2.0 mm i.d.); mobile phase A was methanol: acetonitrile (1:3), 0.01% BHT and 0.1% formic acid and mobile phase B was methyl tert-butyl ether (containing 0.01% BHT); the gradient elution program was 0 min A/B 100:0 (V/V), 3 min 100:0 (V/V), 5 min 30:70 (V/V), 9 min 5:95 (V/V), 10 min 100:0 (V/V), 11 min 100:0 (V/V); the flow rate was 0.8 mL/min, column temperature was 28 °C, and the injection volume was 2 µL. The mass spectrometry used a 350 °C Atmospheric Pressure Chemical Ionization Source. In QTRAP® 6500+, the Declustering Potential and Collision Energy methods were used for the detection. The data were processed using the Analyst 1.6.3 software (Sciex). Scheduled multiple reaction monitoring was used for the analysis, and the Multiquant 3.0.3 software (Sciex) was used to quantify the carotenoids. The declaration of potentials and collision energies as key mass spectrometer parameters were analyzed for optimization. Statistical analysis All statistical analyses and plots in this study were performed using R software. Alpha and beta diversities were calculated after normalization, and the Shannon diversity index was used to represent alpha diversity. The significant differences of Shannon index and Simpson index between the groups were analyzed by t -test. When p < 0.05, significant differences were confirmed between the groups. Beta diversity was presented using a Bray–Curtis dissimilarity matrix and permutational analysis. Correlation analysis between microorganisms and carotenoids was performed using the correlation package of R. The studied ‘Orah’ in this study were growing in a local orchard in Wuming town from Nanning, Guangxi, China. The plants were grown from 2017 to 2022 and grown at the same management conditions as described previously . The longitude and latitude positions of experimental place were 23°15′58″ and 108°17′23″E, respectively. The annual average temperature was 22.3 °C, the average temperature of the coldest month was 13.2 °C, the average temperature of the hottest month was 28.8 °C, the average annual precipitation was 1272.3 mm, and the average annual sunshine duration was 1509.3 h. Lighting is natural light. All tested plants assigned to the groups were randomly selected. In this study, we used Strongreen and Yumeiren, to treat the Citrus reticulata Blanco ‘Orah’. Strongreen contains biological fulvic acid (BFA), organic matter, humic acid, manganese, boron, zinc (3–5%), and potassium oxide (≥ 6%) (Jiangmen Just Agrotech Co., LTD, China). Yumeiren contains organic fertilizers used for the enzymatic hydrolysis of fish proteins and peanut bran (Guangxi Yuanrun Agricultural Development Co., Ltd., China). The 60 trees were randomly divided into two groups: WYT and WYCK groups. The WYT group was sprayed with 50 ml Strongreen and 250 g Yumeiren (diluted in 10 kg ddH 2 O) five times every month from July to November in 2022 to enhance fruit enlargement and improve quality. The WYCK group was sprayed any fertilizer but 10 kg ddH 2 O as same as the WYT group. The 10 kg diluted fertilizer or ddH 2 O were sprayed for the trees by digging a circular trench around the drip line of the tree and applying the fertilizer in this trench. Sampling and measurements were double-blind . For each group, 100 g of rhizosphere soil samples from five trees with three replicates were collected using a sampling shovel for 16 S rRNA gene sequencing. 
Briefly, sterilize shovels were used to sample the rhizosphere soil. After removing impurities such as plant roots, stones, etc., from the soil, a portion of the soil sample was preserved in an insulated box containing ice packs and brought back to the laboratory, where it was immediately stored in a -80 °C freezer for microbial analysis. For the fruit measurement, 10 fruits were collected from 10 trees from each group. For the analysis of carotenoids, 3 fruit replicates from 3 trees were obtained in January 2023 for the UPLC-MS/MS analysis. The weight, cross diameter, and longitudinal diameter of the ‘Orah’ fruit were measured following the standard method as previously described on ripening stage. The color of the ‘Orah’ fruit was determined by UltraScan Pro (Hunter Lab, Reston, VA, USA) under room temperature per the L, a, b standard color. Citrus color index (CCI) was used to evaluate fruit color. The CCI was calculated as CCI = (a × 100)/L × b. For 16s rRNA sequencing, DNA from the rhizosphere soil samples (5 g for each sample) was extracted using the TIANamp Soil DNA Kit (TIANGEN, China) according to the manufacturer’s protocols. DNA was assayed using a 1% agarose gel and a NanoDrop 2000 spectrophotometer (NanoDrop, USA) to confirm its quality and calculate its concentration. Sequence fragments of bacterial DNA from the 16 S rRNA genes were amplified using polymerase chain reaction (PCR). The primers were 341 F: 5′- CCTAYGGGRBGCASCAG-3′ and 806R: 5′-GGACTACNNGGGTATCTAAT-3′ as reported before . Three biological replicates were used for each treatment group. The PCR reaction condition was: 98 °C for 30 s, followed by 25 cycles (98 °C for 10 s, 55 °C for 30 s, and 72 °C for 30 s), and a final extension at 72 °C for 5 min. After PCR, the products were isolated by 2% agarose gel electrophoresis and Agencourt Ampure XP beads (Beckman, USA), following the PicoGreen dsDNA quantitation assay (Thermo Fisher, USA). The sequencing libraries were constructed by a TruSeq® DNA PCR-Free Sample Preparation Kit (Illumina, USA), determined by an Agilent Bioanalyzer 2100 (Agilent, USA), and sequenced using an Illumina NovoSeq 6000 platform (Illumina, USA). All the raw sequencing data have been deposited in the NCBI SRA database under the BioProject number PRJNA1090555 ( https://www.ncbi.nlm.nih.gov/bioproject/?term=PRJNA1090555 ). The generated raw reads were filtered to eliminate adaptors, primers, low-quality sequences, nonbacterial ribosome sequences, and chimeras. The sequences were then assembled by FLASH v1.2.11 following clustered into operational taxonomic units by CD-HIT algorithm using the UCLUST program (USEARCH V11, https://www.drive5.com/usearch/ ). Diversity comparisons were analyzed for alpha and beta diversity using QIIME2 plugins . The Bray-Curtis distances and square-root transformed abundance data were analyzed and visualized by the “phyloseq,” “dplyr”, and “ggplot2” R packages. Carotenoid content was analyzed using UPLC-MS/MS (Wuhan MetWare Biotechnology Co., Ltd., Wuhan, China). The fruit skin and pulp from WYCK and WYT groups were first crushed into powder by an MM 400 Grinding Mill (Retsch, Germany) at 4 °C. The analyzed groups were named WYCKS (fruit skin of the WYCK group), WYCKP (fruit pulp of the WYCK group), WYTS (fruit skin of the WYT group), and WYTP (pulp of the WYT group). Then, 50 mg samples were dissolved in 0.6 ml extraction buffer (n-hexane/acetone/ethanol with a volume of 1:1:2 containing 0.01% BHT). 
After adding the standard substance and treated for 20 min at room temperature, the samples were isolated by 12,000 r/min centrifugation for 5 min at 4 °C twice. Samples were dissolved in a methanol/methyl tert-butyl ether (1:1) mixture. By filtration into 0.22 μm pore size, the samples were analyzed by a UPLC-MS/MS system. The system included an ExionLC™ AD ( https://sciex.com.cn/ ) and a QTRAP® 6500+ ( https://sciex.com.cn/ ) using the ESI mode. The condition for ExionLC™ AD was as follows: chromatographic column, YMC C30 (3 μm, 100 mm × 2.0 mm i.d.); mobile phase A was methanol: acetonitrile (1:3), 0.01% BHT and 0.1% formic acid and mobile phase B was methyl tert-butyl ether (containing 0.01% BHT); the gradient elution program was 0 min A/B 100:0 (V/V), 3 min 100:0 (V/V), 5 min 30:70 (V/V), 9 min 5:95 (V/V), 10 min 100:0 (V/V), 11 min 100:0 (V/V); the flow rate was 0.8 mL/min, column temperature was 28 °C, and the injection volume was 2 µL. The mass spectrometry used a 350 °C Atmospheric Pressure Chemical Ionization Source. In QTRAP® 6500+, the Declustering Potential and Collision Energy methods were used for the detection. The data were processed using the Analyst 1.6.3 software (Sciex). Scheduled multiple reaction monitoring was used for the analysis, and the Multiquant 3.0.3 software (Sciex) was used to quantify the carotenoids. The declaration of potentials and collision energies as key mass spectrometer parameters were analyzed for optimization. All statistical analyses and plots in this study were performed using R software. Alpha and beta diversities were calculated after normalization, and the Shannon diversity index was used to represent alpha diversity. The significant differences of Shannon index and Simpson index between the groups were analyzed by t -test. When p < 0.05, significant differences were confirmed between the groups. Beta diversity was presented using a Bray–Curtis dissimilarity matrix and permutational analysis. Correlation analysis between microorganisms and carotenoids was performed using the correlation package of R. Fruit quality and soil content The fruit quality of the two groups was assessed (Fig. A). The fruit weight, cross diameter, longitudinal diameter, and color were compared. Fruit weight was significantly higher in the WYT group than in the WYCK group ( p < 0.05) (Fig. B). The fruit color differed significantly between the WYT and WYCK groups. The analysis results, including L, a, b, and CCI, were significantly different between the groups ( p < 0.05) (Fig. C-F). Bacterial diversity altered by fertilization The 16s rRNA sequencing generated 736,037 clean sequences, and 683,533 sequences (92.87%) were effective in generating 7,126 operational taxonomic units (OTU) (Supplementary Table ). Shannon and Simpson indices were used to analyze the genus diversity of the WYT and WYCK groups. The WYT group showed a higher Shannon index than the WYCK group ( p < 0.05, Fig. A), whereas similar Simpson indices were observed when comparing the WYT and WYCK groups ( p > 0.05, Fig. B). The Curtis similarity and PCA were performed to show the overall divergence in bacterial community composition between the groups (Fig. C, D). The replicate samples from the WYT and WYCK groups demonstrated similar results; however, a significant separation between the two groups was observed. 
Bacterial composition of rhizosphere bacterial community Most of the OTU were assigned into 10 phyla, accounting for 95.41%, including Proteobacteria (30.70%), Chloroflexi (13.55%), Acidobacteria (14.96%), Actinobacteria (17.19%), Cyanobacteria (3.82%), Saccharibacteria (3.71%), Verrucomicrobia (2.82%), Gemmatimonadetes (3.35%), Firmicutes (2.30%) and Bacteroidetes (3.01%) (Supplementary Table , Fig. ). The top three phyla in the WYT group were Proteobacteria (34.00%), Actinobacteria (16.77%), and Acidobacteria (15.69%), whereas those in the WYCK group were Proteobacteria (27.39%), Actinobacteria (17.61%), and Chloroflexi (16.02%). Only 4.59% of the taxa observed in the two groups were not assigned to the top 10 phyla. Rhizosphere bacterial community changes in ‘Orah’ with microbial fertilizer and organic manure combined fertilization The comparison of microbiota in the rhizosphere soil between the WYT and WYCK groups was performed by discriminant analysis of the effect size. The results demonstrated that Pseudomonas was significantly enriched in WYT, whereas Cyanobacteria were significantly enriched in WYCK ( p < 0.05, Student’s t-test) (Fig. ). At the family level, Phyllobacteriaceae was significantly more abundant in the WYT group than in the WYCK group ( p < 0.05). Thermosporothrix and Sphingobium were significantly more abundant in the WYCK group than in the WYT group ( p < 0.05) (Fig. ) (Supplementary Tables - ). Carotenoid changes in ‘Orah’ with microbial fertilizer and organic manure combined fertilization A total of 51 carotenoid components were identified using UPLC-MS/MS (Fig. A). Among the identified carotenoid components, 37 were downregulated in WYCKP compared to those in WYTP. The comparison between WYCKS and WYTS identified 24 significantly different components, including 7 downregulated and 17 upregulated carotenoid components (Fig. B). The top 10 components with the most changes in WYCKP compared to WYTP comparison were violaxanthin myristate, α-cryptoxanthin, lutein dimyristate, zeaxanthin palmitate, violaxanthin dipalmitate, 5,6epoxy-luttein dilaurate, lutein oleate, lutein palmitate, violaxanthin-myristate-laurate, and violaxanthin dimyristate (Fig. C). By WYCKS and WYTS comparison, the most up-regulated 5 carotenoid components were zeaxanthin-laurate-myristate, zeaxanthin dilaurate, zeaxanthin dimyristate, lutein dilaurate, and antheraxanthin while the most up-downregulated 5 carotenoid components were violaxanthin dioleate, 8’-apo-beta-carotenal, violaxanthin-myristate-oleate, rubixanthin palmitate and β-cryptoxanthin palmitate (Fig. D) (Supplementary Table ). Correlations between rhizosphere bacterial community and carotenoids The interactions between the rhizosphere bacterial community and carotenoids were evaluated using correlation analysis. The threshold of the p-value was < 0.0001 in this study. A total of 113 OTU-carotenoid pairs with high correlations were identified in skin tissue (Fig. A). In the pulp tissue, 88 OTU-carotenoid pairs with high correlations were observed (Fig. B). The overlap of the OTU-carotenoid pairs from the skin and pulp tissues was identified. Four OTUs were correlated with seven carotenoid components in the two tissues. The four OTUs were annotated as TRA3-20 (order), Roseiflexus (genus), OPB35 (class), and Fictibacillus (genus) (Fig. C). The results showed that these groups of rhizosphere bacterial affected carotenoid generation in the fruit of “Orah.” The fruit quality of the two groups was assessed (Fig. A). 
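For readers unfamiliar with the diversity indices used in this study, the short sketch below computes the Shannon index from a vector of relative abundances, using the phylum-level proportions reported above as a toy input; this only illustrates the formula and is not a re-analysis of the study's OTU table, which was evaluated with QIIME2 at a finer taxonomic level.

```python
import math

# Toy input: phylum-level relative abundances reported above (fractions of all OTUs).
abundances = {
    "Proteobacteria": 0.3070, "Actinobacteria": 0.1719, "Acidobacteria": 0.1496,
    "Chloroflexi": 0.1355, "Cyanobacteria": 0.0382, "Saccharibacteria": 0.0371,
    "Gemmatimonadetes": 0.0335, "Bacteroidetes": 0.0301, "Verrucomicrobia": 0.0282,
    "Firmicutes": 0.0230, "Other": 0.0459,
}

def shannon(props):
    # H' = -sum(p_i * ln(p_i)) over taxa with a non-zero proportion.
    total = sum(props)
    return -sum((p / total) * math.log(p / total) for p in props if p > 0)

print(f"Shannon index (toy example, phylum level): {shannon(abundances.values()):.2f}")
```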
The fruit weight, cross diameter, longitudinal diameter, and color were compared. Fruit weight was significantly higher in the WYT group than in the WYCK group ( p < 0.05) (Fig. B). The fruit color differed significantly between the WYT and WYCK groups. The analysis results, including L, a, b, and CCI, were significantly different between the groups ( p < 0.05) (Fig. C-F). The 16s rRNA sequencing generated 736,037 clean sequences, and 683,533 sequences (92.87%) were effective in generating 7,126 operational taxonomic units (OTU) (Supplementary Table ). Shannon and Simpson indices were used to analyze the genus diversity of the WYT and WYCK groups. The WYT group showed a higher Shannon index than the WYCK group ( p < 0.05, Fig. A), whereas similar Simpson indices were observed when comparing the WYT and WYCK groups ( p > 0.05, Fig. B). The Curtis similarity and PCA were performed to show the overall divergence in bacterial community composition between the groups (Fig. C, D). The replicate samples from the WYT and WYCK groups demonstrated similar results; however, a significant separation between the two groups was observed. Most of the OTU were assigned into 10 phyla, accounting for 95.41%, including Proteobacteria (30.70%), Chloroflexi (13.55%), Acidobacteria (14.96%), Actinobacteria (17.19%), Cyanobacteria (3.82%), Saccharibacteria (3.71%), Verrucomicrobia (2.82%), Gemmatimonadetes (3.35%), Firmicutes (2.30%) and Bacteroidetes (3.01%) (Supplementary Table , Fig. ). The top three phyla in the WYT group were Proteobacteria (34.00%), Actinobacteria (16.77%), and Acidobacteria (15.69%), whereas those in the WYCK group were Proteobacteria (27.39%), Actinobacteria (17.61%), and Chloroflexi (16.02%). Only 4.59% of the taxa observed in the two groups were not assigned to the top 10 phyla. The comparison of microbiota in the rhizosphere soil between the WYT and WYCK groups was performed by discriminant analysis of the effect size. The results demonstrated that Pseudomonas was significantly enriched in WYT, whereas Cyanobacteria were significantly enriched in WYCK ( p < 0.05, Student’s t-test) (Fig. ). At the family level, Phyllobacteriaceae was significantly more abundant in the WYT group than in the WYCK group ( p < 0.05). Thermosporothrix and Sphingobium were significantly more abundant in the WYCK group than in the WYT group ( p < 0.05) (Fig. ) (Supplementary Tables - ). A total of 51 carotenoid components were identified using UPLC-MS/MS (Fig. A). Among the identified carotenoid components, 37 were downregulated in WYCKP compared to those in WYTP. The comparison between WYCKS and WYTS identified 24 significantly different components, including 7 downregulated and 17 upregulated carotenoid components (Fig. B). The top 10 components with the most changes in WYCKP compared to WYTP comparison were violaxanthin myristate, α-cryptoxanthin, lutein dimyristate, zeaxanthin palmitate, violaxanthin dipalmitate, 5,6epoxy-luttein dilaurate, lutein oleate, lutein palmitate, violaxanthin-myristate-laurate, and violaxanthin dimyristate (Fig. C). By WYCKS and WYTS comparison, the most up-regulated 5 carotenoid components were zeaxanthin-laurate-myristate, zeaxanthin dilaurate, zeaxanthin dimyristate, lutein dilaurate, and antheraxanthin while the most up-downregulated 5 carotenoid components were violaxanthin dioleate, 8’-apo-beta-carotenal, violaxanthin-myristate-oleate, rubixanthin palmitate and β-cryptoxanthin palmitate (Fig. D) (Supplementary Table ). 
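As a small illustration of the citrus color index (CCI) used in the fruit color comparison, the following sketch applies the formula given in the Methods to hypothetical Hunter L, a, b readings. The readings are invented, and grouping the denominator as L × b is an assumption on our part that follows the usual convention for this index.

```python
def citrus_color_index(L: float, a: float, b: float) -> float:
    # CCI = (a * 100) / (L * b); the denominator grouping is assumed here.
    return (a * 100) / (L * b)

# Hypothetical Hunter Lab readings for two fruits (not measured values from this study).
print(round(citrus_color_index(L=60.0, a=15.0, b=55.0), 2))  # more orange -> higher CCI
print(round(citrus_color_index(L=65.0, a=5.0, b=60.0), 2))   # greener/yellower -> lower CCI
```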
The interactions between the rhizosphere bacterial community and carotenoids were evaluated using correlation analysis. The threshold of the p-value was < 0.0001 in this study. A total of 113 OTU-carotenoid pairs with high correlations were identified in skin tissue (Fig. A). In the pulp tissue, 88 OTU-carotenoid pairs with high correlations were observed (Fig. B). The overlap of the OTU-carotenoid pairs from the skin and pulp tissues was identified. Four OTUs were correlated with seven carotenoid components in the two tissues. The four OTUs were annotated as TRA3-20 (order), Roseiflexus (genus), OPB35 (class), and Fictibacillus (genus) (Fig. C). The results showed that these groups of rhizosphere bacterial affected carotenoid generation in the fruit of “Orah.” In this study, we compared the “Orah” under different fertilization conditions to investigate the changes in microbial fertilizer and organic manure combined fertilization. The characteristics of the fruits were compared, and the results showed significant changes in fruit color after combined fertilization. A brighter color was detected in the WYT group. Similar results have been reported for other Citrus varieties. For example, Magnesium (Mg) application can alter fruit coloration and sugar accumulation in navel orange ( Citrus sinensis Osb.) . Using Bacillus subtilis biofertilizer, a simultaneous color change in the fruit skin and pulp of Tarocco blood orange ( Citrus sinensis (L.) Osbeck) was observed . Although the effects of fertilizer application on fruit color have been investigated previously, knowledge of the underlying mechanisms is limited. Undoubtedly, combined microbial fertilizer and organic manure fertilization changed the rhizosphere bacterial community. Therefore, we assayed the bacterial community by rhizosphere soil 16s rRNA sequencing. In addition, to evaluate carotenoid accumulation in the different groups, carotenoids were analyzed using UPLC-MS/MS. Interaction networks between rhizosphere bacterial community and carotenoid components were constructed and key bacterial related to carotenoid accumulation were identified. Therefore, this study investigated changes in fruit color and provided mechanistic details from rhizosphere bacterial community insights using microbial fertilizer and organic manure combined fertilization. The top 10 phyla with OTU were identical for the ‘Orah’ in the two groups. Thus, at the phylum level, the rhizosphere bacterial composition was similar with and without combined fertilization of Strongreen and Yumeiren. However, in the WYT group, Pseudomonas was enriched compared to the WYCK group. Pseudomonas, the major gram-negative bacteria phylum, contains several important genera and pathogenic bacteria. Some free-living bacteria of this phylum participate in nitrogen fixation. In this study, rhizosphere bacterial community, which are free-living bacteria, were obtained from the root microbial community. Palm oil seedlings ( Elaeis guineensis Jacq.), sterilization, and chemical fertilizers change the Pseudomonadota community and contribute to the high nutrient transformation effectiveness . Similar results were reported for Myrothamnus flabellifolia , winter wheat ( Triticum aestivum L.) , and maize ( Zea mays L.) . Thus, it could be inferred that these enriched bacteria are responsible for nitrogen fixation and promote the production of the ‘Orah’ fruit. In the WYCK group, cyanobacteria, which are gram-negative bacteria, were enriched. 
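The OTU–carotenoid screening described in the Results above amounts to computing a correlation and a p value for every OTU–carotenoid pair and keeping only the pairs below a stringent threshold. A minimal sketch of that idea, with entirely made-up abundance and carotenoid values, is given below; the study itself performed this step with the correlation package in R and a threshold of p < 0.0001.

```python
from itertools import product
from scipy.stats import pearsonr

# Hypothetical per-sample data: OTU relative abundances and carotenoid levels (6 samples).
otus = {
    "OTU_A": [0.12, 0.15, 0.11, 0.30, 0.28, 0.33],
    "OTU_B": [0.05, 0.04, 0.06, 0.05, 0.04, 0.06],
}
carotenoids = {
    "zeaxanthin_dilaurate": [1.1, 1.3, 1.0, 2.9, 2.7, 3.1],
    "lutein_dilaurate":     [0.8, 0.7, 0.9, 0.7, 0.9, 0.8],
}

P_THRESHOLD = 0.05  # relaxed here purely for the toy data; the study used p < 0.0001

for (otu, x), (car, y) in product(otus.items(), carotenoids.items()):
    r, p = pearsonr(x, y)
    if p < P_THRESHOLD:
        print(f"{otu} ~ {car}: r = {r:.2f}, p = {p:.3g}")
```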
Cyanobacteria can associate with other plants to fix nitrogen. Several factors may explain how the fertilization improved microbial communities and agronomic traits: first, the fertilization alters the nutrient content and distribution in the soil, affecting the quantity and diversity of microbial communities; second, the fertilization provides additional nutrient sources, enhancing microbial metabolic activity; third, dripline fertilization, because of its localized application, can cause uneven distribution of fertilizers in the soil, leading to spatial heterogeneity in microbial community structure. Thus, it is reasonable that the present fertilization method improved microbial communities and fruit quality. This study revealed that 37 of the 51 carotenoid components tested were downregulated in WYCKP compared to those in WYTP. Surprisingly, all carotenoid components in the pulp were decreased by combined fertilization. In contrast, 7 downregulated and 17 upregulated carotenoid components were identified in the skin. These results indicated that the effects of combined fertilization on carotenoid accumulation differed between the pulp and the skin of the ‘Orah’ fruit. To date, many reports have demonstrated conserved carotenoid metabolism across different plant species. However, the carotenoid content under the different fertilization conditions was significantly different. For example, magnesium treatment could increase the lutein, β-cryptoxanthin, zeaxanthin, and violaxanthin levels in the pulp of Satsuma mandarin ( C. unshiu Marc.). The sulfur fertilization of spinach ( Spinacia oleracea ) can increase carotenoid levels. The more vibrant colors of the fruit skin in the combined-fertilization ‘Orah’ are associated with increases in zeaxanthin-related components, lutein dilaurate, and antheraxanthin. Carotenoids in the skin are yellow and red. Furthermore, the β-carotene pathway yielded a substantial amount of decomposition products among the decreased components. Thus, the results suggest that combined fertilization produced the more vibrant colors and higher juice yield by increasing products of the α-carotene pathway rather than the β-carotene pathway. Correlation analysis identified an interacting network that included four OTUs and seven carotenoid components. The OTUs identified were TRA3-20 (order), Roseiflexus (genus), OPB35 (class), and Fictibacillus (genus). TRA3-20 is positively correlated with C and N metabolism. Roseiflexus is a carotenoid producer and grows photoheterotrophically. OPB35 is a good predictor of total organic carbon (TOC), total nitrogen (TN), and total phosphorus (TP), which may contribute to soil health for plant growth. Fictibacillus belongs to the Bacillaceae family. Several species of this genus can improve carbon metabolic properties and play a role in promoting diversity in the rhizosphere microbial community. Soil health, including soil microbiome communication, is important for the interactions between microorganisms and plants. The rhizosphere bacterial community is crucial for soil metabolism. Our results demonstrated that the key microorganisms were strongly related to the accumulation of several carotenoids involved in the coloration of the ‘Orah’ skin. The seven carotenoid components that correlated with the OTUs were from both the lycopene ε-cyclase (LCYE)- and lycopene β-cyclase (LCYB)-mediated synthesis pathways.
Microbial fertilizer and organic manure combined fertilization changed the bacterial community in the rhizosphere soil. Thus, we propose that the changes in carotenoids were driven by an improvement in the rhizosphere bacterial community rather than by direct regulation of the carotenoid synthesis pathway. The fertilization improves soil nutrient status, which not only supplies the plants directly but also provides essential nutrients for soil microorganisms, promoting their growth and metabolic activities. Meanwhile, fertilizers can promote the proliferation of beneficial microorganisms, and these beneficial microorganisms form symbiotic relationships with plant roots, enhancing the plant's ability to absorb nutrients. Additionally, some microorganisms secrete plant growth-promoting substances (e.g., auxins, gibberellins), further stimulating plant growth and fruit development. Furthermore, the fertilization can optimize the microbial community structure, increasing the proportion of beneficial microorganisms while suppressing pathogenic microbes. This optimization of the microbial community helps reduce plant diseases, thereby enhancing fruit yield and quality. Finally, microorganisms influence carotenoid synthesis in plants by regulating nutrient supply and metabolic pathways in the soil. For instance, certain microorganisms can enhance the plant's uptake of nitrogen and potassium, elements that are closely related to carotenoid synthesis. Adequate nitrogen and potassium supply can increase the synthesis of carotenoids, making the fruit color more vibrant. In this study, we compared the fruit characteristics, rhizosphere bacterial community, and carotenoids in ‘Orah’. The combined fertilization improved fruit weight and Citrus color and increased the abundance of Pseudomonas in the rhizosphere. In addition, 37 and 24 carotenoid components were significantly changed by combined fertilization in the pulp and skin comparisons, respectively. A regulatory network linking 4 OTUs and 7 carotenoid components affected by combined fertilization was constructed. Thus, the combined fertilization contributed to better nutrient absorption by ‘Orah’, resulting in changes in carotenoid accumulation. These results provide evidence for elucidating the mechanism by which microbial fertilizer and organic manure combined fertilization improves the agronomic characteristics of fruits and provide clues to the microorganism-carotenoid regulatory network. Electronic supplementary material (Supplementary Materials 1–5) accompanies this article. |
Cancer diagnosis in the post-coronavirus disease era: the promising role of telepathology and artificial intelligence | 0740f406-7981-4776-9cd5-021225805a61 | 11164283 | Pathology[mh] | Cancer is one of the main public health challenges worldwide, being one of the leading causes of death and representing a significant barrier to increasing life expectancy. In many countries, cancer is the first or second leading cause of premature death before the age of 70. Cancer incidence and mortality are on the rise worldwide . This increase is a result of demographic and epidemiological transitions taking place globally. From a demographic perspective, there is a reduction in the fertility rate and infant mortality, resulting in an increase in the proportion of elderly people in the population. The epidemiological transition, on the other hand, reflects the gradual shift from mortality from infectious diseases to deaths related to chronic diseases. Population aging and changes in behavior and environment, such as structural changes affecting mobility, recreation, diet, and exposure to environmental pollutants, contribute to increased cancer incidence and mortality . In countries with a high human development index (HDI), impacts on incidence and mortality rates have been observed through effective actions for the prevention, early detection, and treatment of cancer. On the contrary, in countries in transition, these rates continue to increase or, at most, remain stable. The challenge for less developed countries is to make more effective use of available resources and efforts to control cancer. According to estimates by the Global Cancer Observatory (Globocan), prepared by the International Agency for Research on Cancer (IARC), in 2020, there were about 19.3 million new cases of cancer worldwide (excluding cases of non-melanoma skin cancer, which totaled 18.1 million). It is estimated that one in five people will get cancer in their lifetime , . The 10 most common cancers account for more than 60% of new cases. Female breast cancer is the most common cancer globally, with 2.3 million (11.7%) new cases, followed by lung cancer, with 2.2 million (11.4%); colon and rectum, with 1.9 million (10.0%); prostate, with 1.4 million (7.3%); and non-melanoma skin, with 1.2 million (6.2%) new cases. For Brazil, the estimate for the three-year period from 2023 to 2025 indicates that there will be approximately 704,000 new cases of cancer, 483,000 of which are cases of non-melanoma skin cancer when cases of non-melanoma skin cancer are excluded. Non-melanoma skin cancer is estimated to be the most prevalent, accounting for about 220,000 cases (31.3%). Next is breast cancer, with 74,000 cases (10.5%); prostate, with 72,000 cases (10.2%); colon and rectum, with 46,000 cases (6.5%); lung, with 32,000 cases (4.6%); and stomach, with 21,000 new cases (3.1%) . When analyzing the most frequent types of cancer in men, there is a predominance of non-melanoma skin cancer, with 102,000 cases (29.9%); followed by prostate cancer, with 72,000 cases (21.0%); colon and rectum, with 22,000 cases (6.4%); lung, with 18,000 cases (5.3%); stomach, with 13,000 cases (3.9%); and oral cavity, with 11,000 cases (3.2%). In women, the most common cancers are non-melanoma skin cancers, with 118,000 cases (32.7%); breast, with 74,000 cases (20.3%); colon and rectum, with 24,000 cases (6.5%); cervix, with 17,000 cases (4.7%); lung, with 15,000 cases (4.0%); and thyroid, with 14,000 cases (3.9%) . 
The coronavirus disease 2019 (COVID-19) pandemic has had a profound impact on health and the global economy. As of October 2023, there were a total of 771,191,203 confirmed cases of COVID-19, with 6,961,014 deaths. In the field of health, the impact was significant. The health system in several countries was overwhelmed, with an urgent need for hospital beds, personal protective equipment, and health workers. Many hospitals and health facilities worked beyond their maximum capacity, struggling to care for all patients affected by the disease. COVID-19 proved to be a serious health threat, especially for vulnerable groups such as the elderly and people with pre-existing conditions. In addition to health, the pandemic also had a devastating impact on the global economy. Business closures, travel restrictions, and lockdown measures resulted in a collapse in tourism, retail, entertainment, and many other sectors. Millions of people lost their jobs and faced financial hardship. Governments around the world had to take urgent action to contain the impact on the economy by implementing financial stimulus packages, aid programs, and support for businesses. Despite these efforts, economic recovery has been an ongoing challenge, with long-lasting consequences for many industries and individuals. Vaccination has been a key tool in the fight against the disease. As of October 5, 2023, a total of 13,516,185,809 vaccine doses had been administered. Mass vaccination offered hope of controlling the spread of the virus, lessening the severity of the disease, and reducing the number of deaths. In May 2020, the American Society of Clinical Oncology (ASCO) published a special report recommending the postponement of any clinic visits and any cancer screening, diagnosis, or staging-related procedures if such postponement did not pose a risk of disease progression or worsening prognosis. Some international studies show a decrease of 65.2% in new cancer diagnoses in the first months of the pandemic. Screening for some cancers was hampered, with data showing that breast and colorectal cancers were the most affected, at 89.2% and 84.5%, respectively. In a study carried out in the United Kingdom, the lockdown caused the suspension of cancer screenings, compromising the early diagnosis of numerous patients. In that setting, only patients with critical or clearly symptomatic clinical conditions were directed to diagnostic intervention. Cancer records from National Health Service (NHS) hospital databases were used for patients aged 15–84 years diagnosed with breast cancer (35,583), colorectal cancer (24,975), and esophageal cancer (6,744) in 2010, with follow-up until 2014. In patients with primary lung cancer (29,305), 2012 was used as the year of diagnosis and 2015 as the final follow-up date. Using a flowchart to define the pathways of cancer patients within the NHS, an estimate was made to assess the consequences of delayed diagnosis in this group of patients over a period of 12 months, starting in March 2020 (lockdown date), and contextualizing its impact 1, 3, and 5 years after the initial diagnosis. In this methodology, three pathways or flows of these patients were considered, ranging from the best to the worst scenario. Based on this, the impact on survival at 1, 3, and 5 years after diagnosis was estimated, allowing calculation of the total number of deaths attributable to cancer and the total number of years of life lost compared with pre-pandemic data.
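The kind of estimate described in that UK modelling study can be illustrated with a toy calculation: given a baseline number of deaths at 5 years and a scenario-specific relative increase attributable to diagnostic delay, the additional deaths and, with an assumed mean years of life lost per death, the total life-years lost can be derived. All numbers below are invented for illustration and are not the figures reported by the study.

```python
def excess_burden(baseline_deaths: int, relative_increase: float, mean_yll_per_death: float):
    # Additional deaths attributable to delayed diagnosis under one scenario,
    # and the corresponding total years of life lost (YLL).
    extra_deaths = baseline_deaths * relative_increase
    return extra_deaths, extra_deaths * mean_yll_per_death

# Hypothetical scenario: 1,000 baseline deaths at 5 years, an 8% relative increase,
# and an assumed 15 years of life lost per additional death.
deaths, yll = excess_burden(1000, 0.08, 15)
print(f"Additional deaths: {deaths:.0f}, years of life lost: {yll:.0f}")
```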
In Brazil, there are several articles reporting the impact of the pandemic on anatomical and pathological diagnoses of cancer, especially in the public health system. The Brazilian Society of Pathology (SBP) was one of the first societies to warn about the problem of cancer diagnosis in the midst of the pandemic. In an article published in Folha de São Paulo on April 17, journalist Claudia Colucci interviewed several representatives of medical societies, among which Dr. Clóvis Klock, at the time President of the Advisory Board of SBP., warned that many pathology services had a 70–80% decrease in cancer diagnoses at the beginning of the pandemic . Subsequently, many articles have demonstrated these aspects of the prediction and impact of the decrease in diagnoses, both in Brazil and in other countries. This impact has been greater in some countries, especially in the case of the most vulnerable people – . In all scenarios, an increase of 7.9–9.6% in breast cancer deaths was estimated within 5 years after diagnosis, meaning 281–344 more deaths, respectively. In colorectal cancer, the increase was from 15.3% (1,445) to 16.6% (1,563), and in lung cancer, the increase was from 4.8% (1,235) to 5.3% (1,372). And finally, the increase seen in patients with esophageal cancer was 5.8% (330) to 6% (342). These data show that there has been a significant increase in preventable deaths in the United Kingdom, likely due to restrictive measures and social isolation . Another study observed a 40% reduction in the weekly incidence of cancer in the Netherlands and 75% in the United Kingdom since the beginning of the COVID-19 pandemic. This study used a methodology similar to ours, evaluating the records in a database from January to April 2019 comparing them with the same period in 2020. Delays in cancer diagnosis can occur at different levels of health care: the patient level, primary care, and secondary care. Late diagnoses of more advanced neoplastic diseases may occur when patients are slow to recognize and act on suspicious symptoms . Lack of awareness about early cancer symptoms is the main reason for late presentation, especially when symptoms are atypical . In addition, the high demand for specialized medical services can create an additional barrier, delaying diagnosis, especially in public health services . The COVID-19 pandemic has had significant impacts on cancer diagnosis and treatment, with delays in detection and overburdening health systems. In this context, telepathology and artificial intelligence (AI) emerge as promising tools to overcome these challenges and provide accurate and timely diagnoses . Telepathology allows the remote analysis of pathological samples, especially slides, whether hematoxylin and eosin, or special techniques, such as immunohistochemistry, facilitating access to specialists and collaborative interpretation of complex cases . With telepathology, it is possible to send scanned images of slides to specialists anywhere in the world, allowing for accurate and rapid assessment. This is especially relevant in resource-constrained areas or during public health crises such as the COVID-19 pandemic . Telepathology can be used in several stages of cancer diagnosis, including screening, primary diagnosis, and second opinion, providing greater agility and access to specialized care. AI, through advanced algorithms, can analyze large amounts of data quickly and accurately. 
In cancer diagnosis, AI has shown promising results in early detection, differentiation between benign and malignant lesions, classification of cancer subtypes, and selection of personalized therapies. These capabilities can help speed up the diagnostic process and improve accuracy, allowing for more appropriate and timely treatment for patients – . The use of telepathology and AI in cancer diagnosis can bring several benefits to overcoming the challenges posed by the COVID-19 pandemic. These technologies make it possible to carry out remote consultations, avoiding the need for patients to travel and reducing the risk of contamination . In addition, AI's ability to analyze quickly and accurately contributes to decreasing diagnostic delays and providing reliable results. Implementing these technologies can improve access to healthcare services, particularly in remote or resource-limited areas . The use of telepathology and AI in cancer diagnosis raises important ethical and regulatory considerations. Resolution No. 2,264/2019 regulates the use of telepathology in Brazil. It is necessary to ensure the privacy and protection of patient data, informed consent for the use of technologies, and the appropriate regulation of companies that develop and market telepathology and AI solutions, following the General Data Protection Law (Law No. 13,853) of 2019 . In addition, it is essential to ensure that these technologies are used as an auxiliary tool for physicians, respecting the expertise and clinical judgment of healthcare professionals. Cancer diagnosis faces significant challenges in the context of the COVID-19 pandemic. Telepathology and AI emerge as promising solutions for early detection and accurate diagnosis, overcoming delays and reducing the need for patients to travel. The implementation of these technologies requires appropriate ethical and regulatory considerations to ensure their responsible and effective use. Going forward, telepathology and AI are expected to continue to evolve, providing significant advancements in cancer diagnosis and treatment, regardless of public health crises like COVID-19. In addition, it is important to highlight that telepathology and AI can also be useful in the monitoring and follow-up of cancer patients, enabling the early identification of recurrences and the adjustment of treatments in a personalized way. These technologies have the potential to revolutionize the approach to cancer by offering more accurate, efficient, and accessible medicine. Therefore, investments in research, development, and implementation of telepathology and AI in the context of cancer are essential to improve treatment outcomes and quality of life for patients. Telepathology and AI are promising tools in cancer diagnosis, especially in the post-COVID-19 pandemic context. These technologies can provide accurate and timely diagnoses, overcoming delays caused by social distancing measures and overburdening healthcare services. However, it is critical to ensure the protection of patient data, proper regulation, and responsible use of these technologies. With continued investments and advancements, telepathology and AI are expected to play a crucial role in improving access to healthcare services and optimizing cancer diagnosis and treatment, achieving better outcomes for patients worldwide. |
Traditional East Asian herbal medicines for the treatment of poststroke constipation | 072bfdb0-f235-4db9-8462-66f95e3af1bc | 8052026 | Pharmacology[mh] | Introduction Post-stroke constipation is a major complication after stroke and has been reported to occur in 22.9–79% of patients with stroke. Discomfort due to constipation causes distress in both patients and their caregivers, and can negatively affect the patient's quality of life. Furthermore, post-stroke constipation increases the length of hospital stay, confers poor rehabilitation outcome, increases the recurrence of stroke, and can cause death in patients with stroke; furthermore, it has been reported to increase the incidence of infectious complications, such as pneumonia and urinary tract infections. Therefore, active and prompt treatment of post-stroke constipation is essential. Currently, pharmacotherapies, such as laxatives (osmotic and stimulant), anticholinesterases, enterokinetic medications, secretagogues, and serotonin 5-HT4 receptor agonists, have been mainly used to treat post-stroke constipation. However, these medications are known to cause adverse effects, including electrolyte imbalance, nausea, headache, diarrhea, abdominal pain, anaphylaxis, and carcinogenesis. Therefore, there is a shortage of effective strategies for the treatment of constipation in patients with stroke, mostly elderly patients. In addition, long-term use of conventional pharmacotherapies can cause dependence and permanent changes in the bowel habits of patients with stroke. These limitations of existing therapies warrant the need to develop safer and more effective treatments for post-stroke constipation. Traditional medicine, which mainly uses herbs, acupuncture, and moxibustion, is still widely used in Northeast Asian regions, such as Korea, China, Japan, and Taiwan, and several related studies on the effects of traditional medicine to treat functional constipation or post-stroke constipation have steadily emerged. Dahuang (Rhei Radix et Rhizoma) is the most commonly used herb for treating constipation. A prospective, double-blind, double-dummy, randomized controlled trial suggested that MaZiRenWan, which contains Dahuang, could be effective to treat functional constipation. An open-label study reported that another herbal prescription, Daikenchuto, which does not contain Dahuang, significantly improve the constipation score (constipation scoring system [CSS]) in patients with post-stroke constipation. In order for decision makers to easily utilize these existing evidence in the clinical setting, a systematic review is needed to identify, evaluate, and summarize related studies. However, to date, no systematic review has been conducted to evaluate the efficacy and safety of traditional East Asian herbal medicine to treat post-stroke constipation. Therefore, the aims of this study are as follows: (1) To assess whether traditional East Asian herbal medicine therapies for the treatment of post-stroke constipation are more effective and safer than conventional Western medicine therapies or placebo. (2) To assess whether adjunct traditional East Asian herbal medicine therapies in combination with conventional Western medicine therapies is more effective and safer than conventional Western medicine therapies alone for the treatment of post-stroke constipation.
Methods

2.1 Study registration
The protocol of the present study adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocol (PRISMA) guidelines and checklist and has been registered in the Research Registry (2021) under the number reviewregistry1117.

2.2 Eligibility criteria for study selection

2.2.1 Types of studies
Only randomized controlled trials (RCTs) investigating the efficacy and safety of traditional East Asian herbal medicines for the treatment of post-stroke constipation will be included, without any publication or language restrictions. Quasi-randomized controlled trials (such as those allocating participants by alternate days of the week or date of birth), non-RCTs, case reports, case series, uncontrolled trials, and laboratory studies will be excluded, as will studies that fail to provide detailed results. Cross-over trials will also be excluded because of the potential for a carry-over effect.

2.2.2 Types of participants
Eligible participants will be adult patients (over 18 years of age) with constipation after a first-ever or recurrent stroke. Post-stroke constipation should be diagnosed according to at least one of the current diagnostic criteria or the diagnostic criteria in use at the time of the study. Patients with a history of constipation before the diagnosis of stroke will be excluded. There will be no restrictions on sex, ethnicity, symptom severity, disease duration, or clinical setting; however, patients with subdural hemorrhage or subarachnoid hemorrhage will be excluded.

2.2.3 Types of interventions
We will include studies using traditional East Asian herbal medicines alone, or as an adjunct to conventional Western medicine therapies, as the experimental interventions. Only orally administered herbal medicines will be included, with no limitations on dosage, frequency, duration of treatment, or formulation (decoctions, extracts, tablets, capsules, and powders); intravenous or acupuncture-point injections of herbal medicines will therefore be excluded. The control interventions will include placebo, placebo plus conventional Western medicine therapies, or conventional Western medicine therapies alone. Studies comparing traditional East Asian herbal medicines with other traditional East Asian medicine therapies, such as a different herbal formula, acupuncture, or moxibustion, will be excluded.

2.2.4 Types of outcome measures
The primary outcome will be the frequency of spontaneous defecation, defined as the mean number of spontaneous defecations per week. Secondary outcomes will include the constipation scoring system (CSS) score and gas volume score (calculated by Koide's method), the frequency of use of rescue medications (laxatives or rectal evacuants), mean transit time, the total effective rate for post-stroke constipation, and parameters evaluating neurologic deficits, such as the National Institutes of Health Stroke Scale score, modified Rankin Scale (mRS) score, modified Barthel Index (mBI), and quality of life (QoL). We will also record the number and severity of adverse events.
2.3 Search strategies for the identification of studies

2.3.1 Electronic searches
The following electronic databases will be searched from inception to April 2021: MEDLINE (via PubMed), the Cochrane Central Register of Controlled Trials (CENTRAL), Excerpta Medica dataBASE (EMBASE), Scopus, Citation Information by Nii (CiNii), China National Knowledge Infrastructure Database (CNKI), Oriental Medicine Advanced Searching Integrated System (OASIS), and National Digital Science Library (NDSL). The specific search strategies (for example, for PubMed) are listed in Table . The strategy will be adapted to the requirements of each database, and equivalent translations of the search terms will be used so that comparable terms are searched in all databases. If additional information is needed from the identified studies, we will contact the corresponding authors.

2.3.2 Search for other resources
A manual search of the reference lists of relevant articles will also be performed. Clinical trial registries (ClinicalTrials.gov, Clinical Research Information Service [CRIS]), conference presentations, and expert contacts will also be searched.

2.4 Data collection and analysis

2.4.1 Study selection
Two reviewers (SK and CJ) trained in the process and purpose of study selection will independently review the titles, abstracts, and manuscripts of the studies and screen them for eligibility. After removing duplicates, the full texts will be reviewed. All studies identified by the electronic and manual searches will be uploaded to EndNote X9 (Clarivate Analytics), and the reasons for excluding studies will be recorded and shown in a PRISMA flowchart, as shown in Figure . All disagreements will be resolved by consulting an independent reviewer (BHJ).

2.4.2 Data extraction and management
One reviewer (CJ) will independently extract the data and complete the standard data extraction form, which includes study information: first author, publication year, language, sample size, characteristics of participants (e.g., age, sex, and type of stroke), details of randomization and blinding, interventions (names of herbal medicines used, type of formula, number and dosage of administrations), comparison (type of comparator, e.g., placebo or no additional treatment, and its number and dosage of administrations), treatment period, outcome measures, primary and secondary outcomes, and the statistical methods used. Another reviewer (SW) will verify the extracted contents. Disagreements, if any, will be resolved by consulting a third reviewer (BHJ).

2.4.3 Assessment of bias risk and quality of included studies
Two reviewers (SK and CJ) will assess the risk of bias (RoB) using the Cochrane Collaboration tool, which covers random sequence generation (selection bias), allocation concealment (selection bias), blinding of participants and personnel (performance bias), blinding of outcome assessment (detection bias), incomplete outcome data (attrition bias), selective reporting (reporting bias), and other biases. Each domain will be rated as low, unclear, or high RoB.

2.4.4 Measurement of treatment effect
For continuous data, the pooled results will be presented as the mean difference (MD) or standardized MD with 95% confidence intervals (CIs). For dichotomous data, the pooled results will be presented as a risk ratio (RR) with 95% CIs.
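To make the effect measures in 2.4.4 concrete, the sketch below shows how a risk ratio with a 95% confidence interval can be computed from a single trial's 2×2 table, and a mean difference from group summaries. It uses standard formulas only; it is not code from the protocol, and the example counts are hypothetical.

```python
import math

def risk_ratio(events_trt: int, n_trt: int, events_ctl: int, n_ctl: int):
    """Risk ratio (RR) with a 95% CI from one trial's 2x2 table."""
    rr = (events_trt / n_trt) / (events_ctl / n_ctl)
    # Standard error of log(RR)
    se_log_rr = math.sqrt(1 / events_trt - 1 / n_trt + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, (lo, hi)

def mean_difference(mean_trt, sd_trt, n_trt, mean_ctl, sd_ctl, n_ctl):
    """Mean difference (MD) with a 95% CI for a continuous outcome."""
    md = mean_trt - mean_ctl
    se_md = math.sqrt(sd_trt**2 / n_trt + sd_ctl**2 / n_ctl)
    return md, (md - 1.96 * se_md, md + 1.96 * se_md)

# Hypothetical example: 24/40 responders with herbal medicine vs 15/38 with control
print(risk_ratio(24, 40, 15, 38))
```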
2.4.5 Managing missing data
If data are missing, insufficient, or unclear, we will contact the corresponding author to gather the relevant information. If the information cannot be obtained, only the remaining available information will be analyzed, and this limitation will be discussed.

2.4.6 Assessment of heterogeneity
We will use the I² statistic to evaluate statistical heterogeneity. Substantial statistical heterogeneity will be considered present if I² is greater than 50%.

2.4.7 Data synthesis
The Review Manager program (V.5.4; The Nordic Cochrane Center, The Cochrane Collaboration, Copenhagen, 2014) will be used for statistical analysis. If I² is ≤50%, a fixed-effect model will be used to evaluate the outcome data; otherwise, a random-effects model will be used. The studies will be synthesized according to the type of intervention and/or control as follows:
1. Traditional East Asian herbal medicines vs. conventional Western medicine therapies
2. Traditional East Asian herbal medicines vs. placebo
3. Traditional East Asian herbal medicines + conventional Western medicine therapies vs. placebo + conventional Western medicine therapies
4. Traditional East Asian herbal medicines + conventional Western medicine therapies vs. conventional Western medicine therapies alone.
Heterogeneity will be assessed across the included literature, and if enough studies are available to investigate its causes, the subgroups indicated below (Subgroup analysis section) will be examined. If more than 10 studies are included in the meta-analysis, we will assess publication bias using Egger's test and depict the results visually with a funnel plot. We will use the Grading of Recommendations Assessment, Development and Evaluation (GRADE) pro software from Cochrane Systematic Reviews to create a Summary of Findings table.

2.4.8 Subgroup analysis
If sufficient studies are available to investigate the causes of heterogeneity, the following subgroups will be assessed: type of stroke (e.g., ischemic or hemorrhagic), stroke duration (e.g., acute or chronic), the name of the herbal medicine used, and the formulation of the herbal medicine (such as granules or decoctions).

2.4.9 Sensitivity analysis
We will perform a sensitivity analysis to verify the robustness of the results by assessing the impact of sample size, high RoB, missing data, and the selected model. Following these analyses, studies judged to be of low quality will be removed to confirm the robustness of the results.

2.4.10 Ethics and dissemination
Formal ethical approval was not required for this protocol. We will collect and analyze data from published studies, and because no patients are directly or specifically assessed, individual privacy is not a concern. The results of this review will be disseminated in peer-reviewed journals or presented at a relevant conference.
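As a concrete illustration of the synthesis rules in sections 2.4.6 and 2.4.7 (fixed-effect model when I² ≤ 50%, random-effects model otherwise), the sketch below pools per-study effect estimates with inverse-variance weights and computes Cochran's Q, I², and a DerSimonian–Laird between-study variance. It is a simplified stand-in for what Review Manager computes internally, not the protocol's analysis code, and the example numbers are hypothetical.

```python
import math

def pool_inverse_variance(effects, variances):
    """Inverse-variance meta-analysis with an I2-based choice of model.

    effects, variances: per-study effect estimates (e.g., log RR or MD)
    and their variances.
    """
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

    # Cochran's Q and I2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    if i2 <= 50:  # fixed-effect model
        pooled, se, model = fixed, math.sqrt(1 / sum(w)), "fixed"
    else:         # DerSimonian-Laird random-effects model
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)
        w_re = [1 / (v + tau2) for v in variances]
        pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
        se, model = math.sqrt(1 / sum(w_re)), "random"
    return model, pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical log risk ratios and variances from three trials
print(pool_inverse_variance([0.30, 0.18, 0.45], [0.02, 0.05, 0.04]))
```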
Discussion
Post-stroke constipation can negatively impact the prognosis of patients with stroke. It not only leads to poor QoL but also increases the prevalence of complications such as pneumonia and urinary tract infection. However, currently used pharmacological treatments provide only short-lived relief and carry adverse effects, such as electrolyte imbalance and anaphylaxis, which could be fatal in patients with stroke; thus, the need for new treatments continues to grow. Clinical trials have reported that MaZiRenWan (which contains Dahuang) and Daikenchuto (which does not) could be effective in treating functional constipation. Both are herbal combinations with a long history of use, listed in the "Synopsis of Prescriptions of the Golden Chamber," published during the Han Dynasty in ancient China, and both have traditionally been used to relieve constipation. The pharmacological mechanisms underlying the clinical effects of these two prescriptions have also been reported. In a previous study, a focused network pharmacology approach was used to analyze the mechanism of action of MaZiRenWan on constipation; the study found that representative compounds of MaZiRenWan, such as amygdalin, albiflorin, emodin, honokiol, and naringin, could induce spontaneous contractions of colonic smooth muscle. Furthermore, several previous studies have suggested that Zanthoxylum fruit, one of the components of Daikenchuto, could improve delayed propulsion in the small intestine and distal colon, while maltose, another component, induces endogenous cholecystokinin secretion, both of which reportedly help to improve constipation. Thus, traditional East Asian herbal medicines may become alternatives to existing Western medicines for the management of post-stroke constipation. The current review will assess the efficacy and safety of herbal medicine for the treatment of post-stroke constipation and help establish novel management strategies that are expected to reduce the burden on patients and their caregivers.
Conceptualization: Seungwon Kwon. Data curation: Bo-Hyoung Jang, Jin Pyeong Jeon, Ye-Seul Lee, Seung-Bo Yang, Seungwon Kwon. Formal analysis: Bo-Hyoung Jang, Jin Pyeong Jeon, Ye-Seul Lee, Seungwon Kwon. Funding acquisition: Seungwon Kwon. Project administration: Seungwon Kwon. Writing – original draft: Chul Jin, Seungwon Kwon. Writing – review & editing: Chul Jin, Bo-Hyoung Jang, Jin Pyeong Jeon, Ye-Seul Lee, Seung-Bo Yang, Seungwon Kwon.
The grant number appeared incorrectly as HB20C0147 and has been corrected to HF20C0147.
Predicting orthognathic surgery results as postoperative lateral cephalograms using graph neural networks and diffusion models | 67fba05f-2bf2-4f02-afac-70eebe12df69 | 11911408 | Dentistry[mh] | Orthognathic surgery (OGS) is widely used to correct severe dentofacial deformities. Establishing the surgical treatment objective and predicting surgical results are necessary to obtain a balance among esthetics, function, and stability and ensure patient satisfaction . Therefore, it is essential to compare various treatment options, such as whether to extract teeth or perform single-jaw surgery or double-jaw surgery, in terms of their expected results to select an optimal treatment plan for the patient. Such pre-procedural planning is even more important with the increased demand for appearance enhancements, as orthognathic surgeries are increasingly being done to improve facial esthetics, even for those who do not have severe facial deformities. Thus, the prediction of facial changes that would occur with orthognathic surgery serves as an important factor in deciding whether a patient should receive surgical treatment , . Traditionally, the prediction of OGS has been carried out by tracing lateral cephalometric radiographs. Changes in the facial appearance were predicted based on the ratio of the movement of the soft-tissue landmark corresponding to the hard-tissue landmark using a pre-operational cephalogram (pre-ceph) , . However, this ratio is affected by various factors, such as the direction of bony movement, thickness or tension of soft tissue, type of surgery, and type of malocclusion, and thus, the accuracy is low, and the deviation is exceedingly large for clinical usage. Commercial programs used for orthodontic diagnosis can provide clinically practical guidelines by simulating post-operational (post-op) changes based on the bone–skin displacement ratio but have limitations in describing actual changes. As a result, the post-op changes provided by these commercial programs do not accurately reflect real changes – . To overcome these problems, several researchers had developed various algorithms for accurately predicting soft-tissue changes. However, most of these algorithms have limited application, such as for mandibular surgery only or for mandibular advance surgery only – . Although there had been a rare attempt to develop a prediction algorithm for various surgical movements , its prediction error was exceedingly large that it could not be applied in clinical situations. Recently, some investigators have been studied to predict surgical results in three dimensions(3D) – . CBCT was introduced into the field of dentistry from its early stages of development due to its advantages of being accurately reproducing the craniofacial structures in 3D without distortion, magnification, or overlap of images with low radiation dose . Initially, CBCT was mainly used to evaluate the alveolar bone region , but as the field of view (FOV) gradually increased, its application has expanded to include the evaluation of impacted teeth , assessment of diseases or trauma in the craniofacial region , and analysis for orthodontics and OGS , . Lee et al. attempted to predict facial changes in 10 OGS patients using CBCT and facial scans. They achieved satisfactory results within 2.0 mm, but the sample size was too small. Resnick et al. also evaluated and predicted soft tissue changes in three dimensions after maxillary surgery, but obtained results that were unsatisfactory for clinical application. 
Bengtsson et al. compared soft tissue predictions using 2D cephalograms and 3D CBCT and found no significant difference in accuracy. However, they reported that 3D analysis is more advantageous in cases of facial asymmetry. With the application of CBCT to OGS, the radiation dose to which patients are exposed has also increased as the FOV and image resolution have increased. Previous studies on CBCT dosimetry have shown that the mean organ dose (84–212 μSv) is significantly higher than that delivered for the acquisition of lateral cephalograms and panoramic radiographs. Jha et al. investigated the cancer risk for various organs based on the median and maximum CBCT imaging conditions commonly used in Korea. The results showed that cancer risk was higher in women than in men, increased at younger ages, and rose with the number of imaging sessions; cancer risk is thus influenced by factors such as age, sex, equipment parameters, and the number of imaging sessions. Therefore, the ALARA (As Low As Reasonably Achievable) principle must be strictly followed when performing CBCT in clinical practice, and routine CBCT imaging for orthodontic treatment cannot be justified. For the analysis of OGS, CBCT can offer advantages in cases of severe skeletal discrepancies, such as pronounced facial asymmetry with a canted occlusal plane or developmental disorders. While some studies advocate the use of CBCT for orthognathic or TMJ surgery, systematic reviews have failed to support its universal application. As generative AI based on deep-learning models has improved dramatically, some researchers have tried to apply synthetic images in medical and dental imaging. Kim et al. attempted to generate lateral cephalograms using deep learning. They reported visual Turing test results showing that the synthetic lateral cephalograms were indistinguishable from real lateral cephalograms and that tracing on the synthetic images was possible. Diffusion models have led to advancements in multi-modal generation, such as text-to-image or layout-to-image generation, and various applications have been demonstrated in the medical domain. For example, one proposed method overcame the limitations of existing diffusion-based approaches and improved 3D medical image reconstruction tasks, such as MRI and CT, by effectively solving 3D inverse problems. Furthermore, diffusion models can synthesize high-quality medical images, improving medical image analysis performance when data are scarce. Among them, the latent diffusion model enables powerful and flexible generation with conditioning inputs and high-resolution synthesis by incorporating cross-attention layers into the model architecture. With these advances, it could be possible to generate synthetic post-op lateral cephalograms (spost-cephs) for OGS to compare the outcomes of various treatment options. Therefore, the purpose of this study is to predict facial changes after OGS using a latent diffusion model. We utilized deep learning to generate spost-cephs, enabling surgical outcomes to be anticipated and images to be generated for various surgical planning scenarios through condition adjustments. Our approach relies on two components. First, to enhance surgical planning accuracy, we employed a GCNN to predict appropriate surgical movements from the pre-ceph. Second, we took the surgical movements predicted by the GCNN, together with the pre-ceph and its profile-line tracing, as inputs to a diffusion model that generates the spost-cephs.
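The first of these components maps pre-ceph landmarks to surgical movement vectors with a graph network. The GCNN used in GPOSC-Net is not specified in this excerpt, so the sketch below only illustrates the generic graph-convolution propagation rule (symmetric normalized adjacency times node features times weights) applied to a landmark graph; the adjacency, dimensions, and movement-prediction head are hypothetical placeholders, not the authors' architecture.

```python
import numpy as np

def gcn_layer(x, adj, weight):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    x      : (num_landmarks, in_dim) node features, e.g., (x, y) coordinates
    adj    : (num_landmarks, num_landmarks) 0/1 adjacency between landmarks
    weight : (in_dim, out_dim) learnable weights
    """
    a_hat = adj + np.eye(adj.shape[0])              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))   # degree normalization
    norm_adj = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm_adj @ x @ weight, 0.0)   # ReLU

# Hypothetical toy example: 35 landmarks with (x, y) inputs, 16 hidden features
rng = np.random.default_rng(0)
coords = rng.normal(size=(35, 2))                   # pre-ceph landmark coordinates
adjacency = (rng.random((35, 35)) > 0.9).astype(float)
adjacency = np.maximum(adjacency, adjacency.T)      # make the graph undirected
hidden = gcn_layer(coords, adjacency, rng.normal(size=(2, 16)))
movement = hidden @ rng.normal(size=(16, 2))        # per-landmark (dx, dy) head
```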
This generative prediction for orthognathic surgery using ceph network (GPOSC-Net) leveraged pre-cephs to generate spost-cephs based on the intended amount of surgical movement (IASM). We then validated the spost-cephs in several ways. First, to assess their quality and medical realism, a visual Turing test (VTT) was performed with four doctors of dental surgery (DDSs), namely two orthodontists (ODs) and two oral and maxillofacial surgeons (OMFSs) with an average of over 15 years of experience, who attempted to differentiate real post-op lateral cephalograms (post-cephs) from spost-cephs; their average accuracy of 48% indicated that the spost-cephs exhibited medically plausible quality and features. Second, the spost-cephs were validated through a landmark comparison between the post-cephs and the corresponding spost-cephs performed by two ODs. The 35 landmarks were grouped into five regions, and their distance errors were evaluated; in each group, the mean Euclidean distance error was 1.5 mm, and the successful prediction rate (SPR; errors <2.0 mm) for each landmark averaged ~90%. Third, by adjusting the weight of classifier-free guidance (CFG) in GPOSC-Net, we generated spost-cephs for various surgical planning scenarios and requested an evaluation from the same two ODs and two OMFSs. After being shown simulated surgery images generated at IASM values from 0.1 to 1.6, covering under-, exact, and over-setback amounts (where 0 corresponds to the pre-ceph; 1 to the exact setback amount, i.e., similar to the post-ceph; and 1.6 to an over-setback beyond the surgical movement of the post-ceph), they selected the most appropriate surgical outcome image for each patient, resulting in an average selected IASM of 1.03 ± 0.31. Finally, a survey consisting of five questions was performed to evaluate the clinical utility of the proposed model.
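The validation steps just listed reduce to a few simple computations: per-landmark distance errors between post-ceph and spost-ceph tracings, the SPR as the fraction of landmarks with errors under 2.0 mm, and accuracy, sensitivity, and specificity for the VTT. The sketch below restates those definitions in code; the arrays, the calibration to millimeters, and the threshold handling are illustrative rather than the authors' evaluation pipeline.

```python
import numpy as np

def landmark_errors(pred_mm, true_mm):
    """Per-landmark errors in mm; pred_mm and true_mm are (n_landmarks, 2) arrays."""
    diff = pred_mm - true_mm
    euclidean = np.linalg.norm(diff, axis=1)
    return euclidean, np.abs(diff[:, 0]), np.abs(diff[:, 1])

def successful_prediction_rate(euclidean_errors, threshold_mm=2.0):
    """Share of landmarks whose Euclidean error is below the threshold."""
    return float(np.mean(euclidean_errors < threshold_mm))

def vtt_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity for a real-vs-synthetic reading test."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```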
Comparison of landmarks between post-ceph and spost-ceph
To evaluate the accuracy of the model, two ODs traced the landmarks in both the post-cephs and spost-cephs (shown in Fig. ) from the test set. Figure –d show the distance errors for the Euclidean, x-axis, and y-axis, respectively. We categorized all the landmarks into five anatomical groups: cranial base, dental, jaw, upper profile, and lower profile (Table ). The average errors of the landmarks for the internal and external test sets were within 1.5 mm. This was smaller than or similar to the inter-observer differences reported in past studies investigating the reproducibility of landmark selection in real cephalograms. In the internal test, errors ranged from 1.01 ± 0.64 mm at the cranial base to 1.46 ± 0.93 mm at the lower profile, with an average error of ~1.27 ± 0.51 mm. In the external test, errors ranged from 0.85 ± 0.58 mm at the cranial base to 1.51 ± 1.01 mm at the jaw, with an average error of ~1.29 ± 0.62 mm (Fig. ). In the internal test, x-axis errors ranged from 0.59 ± 0.53 mm at the cranial base to 0.94 ± 0.81 mm at the lower profile, with an average error of approximately 0.80 ± 0.40 mm. In the external test, x-axis errors ranged from 0.52 ± 0.45 mm at the cranial base to 1.05 ± 0.96 mm at the lower profile, with an average error of approximately 0.80 ± 0.51 mm (Fig. ). In the internal test, y-axis errors ranged from 0.68 ± 0.60 mm at the cranial base to 0.94 ± 0.77 mm at the lower profile, with an average error of approximately 0.84 ± 0.43 mm. In the external test, y-axis errors ranged from 0.55 ± 0.48 mm at the cranial base to 0.93 ± 0.86 mm at the lower profile, with an average error of approximately 0.74 ± 0.42 mm (Fig. ). The results for each of the landmarks can be found in Supplementary Table of the supplementary materials.

Comparison of accumulated SPRs
The distance errors between the gold standard landmarks and those predicted by the models for the five groups, namely, the cranial base, dental, jaw, upper profile, and lower profile, were evaluated. The SPRs for each group were assessed according to errors <2.0 mm as determined by an OD with more than 15 years of experience (Fig. ). For both the internal and external test sets, landmarks at the cranial base that were not affected by OGS exhibited very high SPRs, whereas landmarks at the remaining parts, whose positions changed as a result of OGS, exhibited lower SPRs. The SPRs for soft-tissue landmarks were lower than those for hard-tissue landmarks, because the errors for the soft-tissue landmarks were generally larger than those for the hard-tissue landmarks. In the internal test, the SPRs were 94% for the cranial base, 79.1% for dental, 78.1% for the jaw, 91.2% for the upper profile, and 76.5% for the lower profile. In the external test, the SPRs were 96.5% for the cranial base, 80% for dental, 81.2% for the jaw, 89.3% for the upper profile, and 74.9% for the lower profile (Table ). The results for each of the landmarks can be found in Supplementary Table of the supplementary materials.

Visual Turing test
A VTT was conducted with two ODs and two OMFSs, with an average of over 15 years of experience, to evaluate the quality of the spost-cephs. In general, a VTT for a generative model is considered ideal when the resulting accuracy is ~50%. We presented 57 pairs of randomly selected images consisting of both real and generated images (1:1 ratio). Although specificity was high for one examiner, the average accuracy of all examiners was 49.55%.
The accuracies of the two ODs and two OMFSs were 45.6, 38.6, 64.9, and 49.1%, respectively. Meanwhile, the sensitivity values for OD1, OD2, OMFS1, and OMFS2 were 51.7, 41.4, 35.5, and 48.3%, respectively, whereas their specificity values were 39.3, 35.7, 96.4, and 50.0%, respectively. These results demonstrated that the quality of the spost-cephs was reasonably good, because even expert DDSs were unable to differentiate between real and generated cephs under blinded conditions.

Digital twin
After the serial generation of spost-cephs based on IASM, as shown in Fig. , two ODs and two OMFSs were asked to choose the most appropriate image among the spost-cephs as a treatment goal. The spost-cephs were generated based on IASM 1.0, which denotes an amount of movement similar to the actual surgical bony movement. In addition, spost-cephs with IASMs corresponding to under- or over-correction were generated as follows: an image generated at IASM 0.8, for example, sets the surgical movement 20% smaller than the actual setback amount, whereas an image generated at IASM 1.2 sets it 20% larger than the actual amount. For IASM 0.1 to 1.6, five images, including the one for IASM 1.0, were thus randomly generated. The two ODs and two OMFSs were asked to select only one image as an appropriate treatment goal based on the pre-ceph. If a spost-ceph generated at IASM 0.8 to 1.2 was selected, it was considered a correct answer, i.e., an appropriate treatment goal. If the selected spost-ceph was an image generated with movement similar to the actual surgical movement, it may be used as a digital twin for predicting the simulated surgical result. The two ODs and two OMFSs independently evaluated a total of 35 cases each and demonstrated an average accuracy of 90.0%, as shown in Fig. . The practicality of the clinical application of spost-cephs was evaluated using the questionnaire shown in Fig. , which assessed whether spost-cephs would be useful for predicting surgical results and for patient consultation. As shown in Fig. , the four DDSs indicated the positive utility of our generative model for most of the questions. However, with regard to question 4, the model is of limited usefulness for assisting surgical planning in clinical practice, because simply presenting post-op images is not of much help in establishing a surgical plan.

Ablation study
We conducted various experiments comparing different conditions and networks. Initially, we compared the performance of generative adversarial networks (GANs) and diffusion models. Subsequently, we enhanced the model by adding various conditions. The first condition used the pre-ceph coordinates of landmarks, whereas the second used surgical movement vectors, which significantly enhanced performance. During the experiments, we identified a problem with the incorrect generation of the mandible. To resolve this, we added the profile line of the pre-ceph as the final condition. This addition significantly enhanced the performance of the model, particularly improving the depiction of the patient's mandible. The results of these experiments are presented in Table . The hyperparameters of the model were set to default values. We used the same dataset for training both the GAN and the diffusion model. The primary backbone model employed for training was StyleGAN, and we utilized a pSp encoder for projection.
Digital twin

After the serial generation of spost-cephs based on the IASM, as shown in Fig. , two ODs and two OMFSs were requested to choose the most appropriate image among the spost-cephs as a treatment goal. The spost-cephs were generated based on IASM 1.0, which denotes an amount of movement similar to that of the actual surgical bony movement. In addition, spost-cephs with IASM values corresponding to under- or excessive movement were serially generated as follows: an image generated based on IASM 0.8, for example, denotes setting the surgical movement to be 20% smaller than the actual setback amount, whereas an image generated based on IASM 1.2 denotes setting the surgical movement to be 20% larger than the actual amount. Five images with IASM values between 0.1 and 1.6, always including one at IASM 1.0, were thus randomly generated. The two ODs and two OMFSs were requested to select only one image as an appropriate treatment goal based on the pre-ceph. If a spost-ceph generated based on IASM 0.8 to 1.2 was selected, it was considered a correct answer, i.e., an appropriate treatment goal. If the selected spost-ceph was an image generated based on movement similar to the actual surgical movement, then it could be used as a digital twin for predicting the simulated surgical result. The two ODs and two OMFSs independently evaluated a total of 35 cases each and demonstrated an average accuracy of 90.0%, as shown in Fig. . The practicality of the clinical application of the spost-ceph was evaluated using the questionnaire shown in Fig. , which attempted to assess whether the spost-ceph would be useful in predicting surgical results and in patient consultation. As shown in Fig. , the four DDSs indicated the positive utility of our generative model for most of the questions. However, with regard to question 4, the model was considered of limited usefulness for assisting surgical planning in clinical practice, because simply presenting post-op images is of little help in establishing a surgical plan.

Ablation study

We conducted various experiments comparing different conditions and networks. Initially, we compared the performance of generative models between generative adversarial networks (GANs) – and diffusion models – . Subsequently, we enhanced the model by adding various conditions. The first condition used the pre-ceph coordinates of the landmarks, whereas the second used the surgical movement vectors, which significantly enhanced performance. During the experiments, we identified a problem with the incorrect generation of the mandible. To resolve this, we added the profile line of the pre-ceph as the final condition. This addition significantly enhanced the performance of the model, particularly improving the depiction of the patient's mandible. The results of these experiments are presented in Table . The hyperparameters of the model were set to their default values. We used the same dataset for training both the GAN and the diffusion model. The primary backbone model employed for training was StyleGAN , , and we utilized a pSp encoder for projection. Furthermore, to facilitate manipulation, we trained an additional encoder, specifically a graph network , , to learn the surgical movement vectors . However, during training with GANs, we frequently observed mode collapse. In addition, no noticeable changes were observed in the generated images in response to the surgical movement inputs.
In this paper, we propose the GPOSC-Net model, based on a GCNN and a diffusion model, which generates spost-cephs to predict facial changes after OGS. First, the GPOSC-Net model employs two modules, i.e., an image embedding module (IEM) and a landmark topology embedding module (LTEM), to accurately obtain the amounts of surgical movement that the cephalometric landmarks would undergo as a result of surgery. Afterward, the model uses the predicted post-op landmarks and the profile lines segmented on the pre-ceph, among other necessary conditions, to generate accurate spost-cephs. In this study, we independently trained two models, which were then combined during the inference process. We conducted training and evaluation using a dataset of high-quality patient data consisting of 707 pairs of pre-cephs and post-cephs dated from 2007 to 2019, provided by nine university hospitals and one dental hospital. To train and test the model, data from four of the institutions were used for internal validation to evaluate the accuracy of the model. Subsequently, to demonstrate the robustness of the model, data from the six other institutions were used for external validation. The cephalometric landmarks of the post-cephs and spost-cephs were then compared. In the internal validation, no statistically significant differences were observed for most of the landmarks (33 of the total 35 landmarks), whereas in the external validation, no statistically significant differences were observed for 23 of the 35 landmarks. Landmarks on the cranial base, which were not changed by surgery, had average errors of 0.85 ± 0.62 mm and 1.07 ± 0.79 mm for the internal and external test sets, respectively. These values were comparable to or smaller than the intra-observer errors observed in reproducibility studies with real cephalograms , . Thus, it could be said that the landmarks in the spost-cephs were not significantly different from those of the real post-cephs. Studies that predict the outcomes of OGS by training artificial intelligence on cephalometric radiographs are still relatively few, and some of them evaluated prediction accuracy using metrics such as the F1 score or AUC for cephalometric measurements . However, such evaluation methods may not always be appropriate for clinical application. Donatelli and Lee argued that in orthodontic research, when assessing the reliability of 2D data, it is more appropriate to represent errors along the horizontal and vertical axes and to evaluate them using the Euclidean distance rather than simply relying on measurements of distance or angles . Previous studies that predicted the outcomes of OGS typically focused on the changes in soft tissue. Suh et al. reported that the partial least squares (PLS) method was more accurate than the traditional ordinary least squares method in predicting the outcomes of mandibular surgery . According to the study by Park et al., when predictions were made using the PLS algorithm, the Euclidean distance from the actual results ranged from 1.4 to 3.0 mm, whereas the AI (TabNet DNN algorithm) prediction error ranged from 1.9 to 3.8 mm . In that study, the PLS algorithm predicted the soft-tissue changes more accurately in the upper part of the upper lip, while the AI (TabNet DNN algorithm) provided more accurate predictions in the lower mandibular border and neck area.
The prediction errors for soft-tissue changes in our study were 0.8 to 1.22 mm in the upper profile and 1.32 to 1.75 mm in the lower profile, which are better outcomes than those of previous studies. Kim et al. predicted the positions of hard-tissue landmarks after surgery using linear regression, random forest regression, the LTEM, and the IEM. They found that combining the LTEM and IEM allowed for more accurate predictions, with errors ranging from 1.3 to 1.8 mm . Our study achieved similar results, with prediction errors ranging from 1.3 to 1.6 mm. For the internal and external test sets, the average errors of the cephalometric landmarks in the dental area were 1.34 ± 0.83 mm and 1.60 ± 1.08 mm, respectively, whereas the errors of the landmarks in the jaw were 1.33 ± 0.86 mm and 1.57 ± 0.94 mm, respectively. Although these errors were larger than those of the landmarks on the cranial base, they were comparable to the inter-observer errors demonstrated in a past study involving real cephalograms , , and thus it could be inferred that the actual surgical results were accurately predicted. In particular, the dental area, which is difficult to reproduce accurately with a generative model, was generated as accurately as the jaws. For the internal test set, there were no statistically significant differences among all 16 landmarks. However, for the external test set, there were significant differences in 6 of the landmarks, four of which were positioned at the jaws. It seemed that the prediction of these landmarks (A point, anterior nasal spine or ANS, protuberance menti, and pogonion) was made difficult by remodeling procedures after surgery, such as ANS trimming and genioplasty. The landmarks in the upper profile had relatively smaller errors than those in the lower profile, but more landmarks showed statistically significant differences in the upper profile than in the lower profile. This was probably due to the small standard deviation of the landmark errors in the upper profile. The upper profile undergoes relatively little or no change due to surgery, and thus the measurement errors were small. By contrast, in the lower profile, the prediction errors appeared relatively larger because of the various changes in chin position that could occur depending on whether genioplasty was performed. Nonetheless, the landmark errors in the lower profile were comparable to the inter-observer errors demonstrated in another study , . The VTT results revealed that the four examiners had ~50% accuracy, suggesting that the spost-cephs were perceived as realistic and could not be differentiated even by expert ODs and OMFSs with an average of over 15 years of experience. Serial spost-cephs adjusted with different IASM values were generated and evaluated in a test on selecting appropriate surgical results based on the pre-cephs. Most of the answers chosen by the four examiners in a blind condition were within the criteria for preferred predictions (0.8 ≤ IASM ≤ 1.2), which meant that, if an appropriate surgical movement could be presented, our generative model would be able to synthesize images that could be used as a simulated surgical goal. Therefore, with our proposed model, the surgical results could be reliably predicted and used in actual clinical practice. In the same test, most of the ODs and OMFSs responded positively regarding the usefulness of the spost-cephs.
In particular, the spost-cephs would be of great help in explaining various kinds of surgical plans to patients and in predicting their surgical results. However, the experts did not have high expectations regarding the usefulness of the spost-cephs in establishing an actual surgical plan. This might be because the actual amounts of bony movement cannot be determined simply from the spost-cephs. A more positive answer could have been obtained if the amounts of bony movement had been presented together with a comparison of the pre-ceph and spost-ceph. This study had several limitations. First, our model depends on two-dimensional cephalometric images, which cannot represent the actual 3D movements and changes due to OGS. In the near future, this study could be extended to use 3D cone-beam computed tomography (CBCT) data from OGS. Second, this study was performed in a single nation and on an Asian population only. We need to extend our model to be applicable to populations of other ethnicities and nations. Lastly, this study demonstrated only the possibility of simulation-based digital twins with our model. For better clinical significance, further real-world clinical validation involving more examiners and conducted in a prospective manner is needed. This study fundamentally aims to assist physicians in making better decisions in ambiguous cases, enhance communication between patients and doctors, and ultimately foster better rapport. However, there is concern that the outcomes of this study could potentially lead to misconceptions among patients, resulting in an increase in unnecessary surgeries or treatments. It is crucial for physicians to be aware of these risks, and there is a need for regulatory agencies to develop regulations that prevent unnecessary treatments. Our group is committed to actively addressing these concerns. Despite these concerns, our study demonstrates that AI-based prediction models, such as GPOSC-Net, can provide valuable insights for surgical planning and clinical decision-making. In summary, we propose GPOSC-Net, an automated and powerful OGS prediction model that uses lateral cephalometric X-ray images. In this study, these images were obtained from nine university hospitals and one dental hospital in South Korea. Our model predicted the movement of landmarks as a result of OGS and generated spost-cephs using only the pre-ceph and the IASM (virtual setback ratio). In comparison with the post-cephs, the model not only accurately predicted the positions of the cephalometric landmarks but also generated realistic spost-cephs. Although 2D images have their limitations in formulating accurate surgical plans, our model has the potential to significantly contribute to simulations for surgical planning and to communication with other dentists and patients.
Ethical approval

This retrospective study was conducted according to the principles of the Declaration of Helsinki. This nationwide study was reviewed and approved by the Institutional Review Board Committees of ten institutions: (A) Seoul National University Dental Hospital (SNUDH) (ERI20022), (B) Kyung Hee University Dental Hospital (KHUDH) (19-007-003), (C) Kooalldam Dental Hospital (KOO) (P01-202105-21-019), (D) Kyungpook National University Dental Hospital (KNUDH) (KNUDH-2019-03-02-00), (E) Wonkwang University Dental Hospital (WUDH) (WKDIRB201903-01), (F) Korea University Anam Hospital (KUDH) (2019AN0166), (G) Ewha Woman's University Dental Hospital (EUMC) (EUMC 2019-04-017-003), (H) Chonnam National University Dental Hospital (CNUDH) (2019-004), (I) Ajou University Dental Hospital (AUDH) (AJIRB-MED-MDB-19-039), and (J) Asan Medical Center (AMC) (2019-0927). The requirement for patient consent was waived by each center's Institutional Review Board Committee.

Overall procedure

Based on the IASM and the pre-ceph, the spost-ceph is generated by GPOSC-Net. In this study, two ODs traced the spost-cephs and compared them with the post-cephs to evaluate the accuracy of the landmark positions and of the soft- and hard-tissue profile lines. A total of 45 landmarks were digitized by experienced orthodontists using the V-ceph software (Version 8.0, Osstem, Seoul, Korea). Additionally, a VTT was conducted with two ODs and two OMFSs to validate the quality of the spost-cephs. During the spost-ceph generation process, additional images reflecting various amounts of surgical movement were generated and reviewed to establish an appropriate surgical plan (Fig. ). The proposed GPOSC-Net model is visualized in Fig. .

Data acquisition

A total of 707 patients with malocclusion who underwent orthognathic surgery (OGS) between 2007 and 2019 at one of nine university hospitals or one dental hospital and who had lateral cephalograms taken before and after surgery (Fig. ) were included in this study (Fig. ). The age of the patients ranged from 16 to 50 years. All lateral cephalogram pairs were anonymized and stored in Digital Imaging and Communications in Medicine (DICOM) format as 12-bit grayscale images. The gender distribution of the patients was nearly equal (Fig. ). In this study, sex or gender was not considered as a factor in the experiments. The average duration of pre-surgical orthodontic treatment was 14 months, although some patients required 2 to 3 years to complete the pre-surgical phase (Fig. ). We initially selected hospitals A, B, and C, which had the richest datasets, as the primary sources for the internal dataset. However, the majority of the patients from institutions A and B underwent two-jaw surgery (Fig. ). Consequently, to prevent the deep learning model from being biased toward patients who underwent two-jaw surgery, we incorporated data from institution D, which had a higher proportion of patients who underwent one-jaw surgery, into our internal dataset. Through this process, a dataset comprising a total of 707 pairs was constructed, of which 550 were utilized as the training dataset, 50 as the validation dataset, and 50 as the internal test set. Additionally, we employed 57 pairs of pre-cephs and post-cephs from university hospitals E, F, G, H, I, and J as the external test set, because the different institutions used different cephalogram machines. In addition, there were variations in the imaging protocols and in the quality of the cephalograms.
With regard to the direction of surgical movement, the majority of the anterior nasal spine (ANS), posterior nasal spine (PNS), and upper-lip landmarks moved anteriorly and superiorly, whereas the majority of the B-point, Md 1 crown, lower lip, soft-tissue pogonion, and soft-tissue menton landmarks moved posteriorly and superiorly (Fig. ). The reason for these surgical movements was that most of the patients had skeletal Class III malocclusions, which required anterior movement of the maxilla and posterior movement of the mandible. For most of the OGSs, the maxilla moved within 10 mm, whereas the mandible moved within 15 mm. Detailed information regarding the composition, demographic characteristics, and cephalography machines, among others, is provided in Supplementary Table of the supplementary materials.

Model description

Overview of GPOSC-Net

Herein, we propose the generative prediction for orthognathic surgery using ceph network (GPOSC-Net) , which comprises two models: a two-module combination of our CNN-based image embedding module (IEM) and GCNN-based landmark topology embedding module (LTEM), which together predict the movement of landmarks that would occur as a result of OGS; and a latent diffusion model , which is used to generate spost-cephs (Fig. ). The IEM utilizes a high-resolution network to maintain detailed representations of the lateral cephalometric images. Before proceeding to the next step, the output of the IEM is subjected to channel coupling by the channel relation score module (CRSM), which calculates the relation score between the channels of a feature map. On the other hand, the LTEM employs a GCNN to learn the topological structures and spatial relationships of the 45 hard- and soft-tissue landmarks. Finally, the movement of these landmarks is predicted by a multi-layer perceptron (MLP) module, which uses the combined outputs of the IEM and LTEM. To generate spost-cephs, the model uses a set of conditions that includes the movement of the landmarks obtained through the IEM and LTEM, along with the segmented profile lines of the pre-ceph. This approach aims to ensure a minimum level of generation capability for our system. To reinforce this capability, we trained an autoencoder on a dual dataset: one part consisted of the labeled pre-ceph and post-ceph images, and the other was an extensive unlabeled set of 30,000 lateral cephalograms, randomly collected between 2007 and 2020, unrelated to any pre- or post-surgical condition or orthodontic treatment, and sourced from an internal institution (Hospital J). The learning methods and the model structure are explained in detail later in this paper. Finally, we employed the IASM during the testing phase to generate serial spost-ceph images corresponding to various amounts of virtual surgical movement. The IASM made it possible to calibrate the expected surgical movement ratio precisely across a continuous spectrum from 0 to 1.6, where a value of 0 represents no surgical movement (similar to the pre-ceph, 0%), a value of 1 corresponds to the full predicted movement (similar to the post-ceph, 100%), and a value of 1.6 equates to an enhanced projection with a 160% setback. This enabled the serial generation of spost-ceph images with nuanced variations in surgical movement. For IASM values ranging from 0.1 to 1.6, five spost-ceph images, including one for IASM 1.0, were randomly generated, and an appropriate treatment goal based on the pre-ceph was selected by two ODs and two OMFSs in a blind condition.
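Conceptually, the IASM acts as a scalar that rescales the predicted surgical movement before image generation: IASM 0 leaves the pre-ceph landmarks unchanged, and IASM 1.0 applies the full predicted movement. A minimal sketch of how serial landmark targets could be derived; the function name and the listed IASM values are illustrative, not the authors' implementation.

import numpy as np

def serial_landmark_targets(pre_landmarks, predicted_movement,
                            iasm_values=(0.6, 0.8, 1.0, 1.2, 1.4)):
    """pre_landmarks, predicted_movement: (45, 2) arrays in mm.
    Returns one landmark configuration per IASM value, which can then be used
    to condition the generation of the corresponding spost-ceph."""
    return {iasm: pre_landmarks + iasm * predicted_movement for iasm in iasm_values}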
Surgical movement vector prediction modules

As indicated earlier, our model consists of the IEM and the LTEM , which are trained using images and landmarks, respectively (Fig. ). The IEM adopted HR-NET as its backbone and was trained to represent a ceph as a low-dimensional feature map. The feature map has 45 channels, one corresponding to each landmark, and each channel has dimensions of 45 × 45. The CRSM is used to measure a relationship score matrix between distinct channels; similarly, this matrix has dimensions of 45 × 45. Finally, an image feature vector is evaluated using a weighted combination of the flattened feature map and the relationship score matrix. On the other hand, the LTEM was designed based on the GCNN to learn the topological structures of the landmarks. The training process of the LTEM is as follows:

$$\mathrm{GCNN}(f_i^k) = f_i^{k+1} = \mathrm{ReLU}\left(f_i^k W_1 + \sum_{j} e_{ij} f_j^k W_2\right),$$

where $W_1$ and $W_2$ are weight matrices learned during training, $f$ denotes the node features, and $e$ is the edge of the graph. Meanwhile, $\mathrm{ReLU}(\cdot) = \max(0, \cdot)$ is the nonlinear activation function, $e_{ij}$ is the learnable connectivity at the $i$th node obtained from $A$, and $f_i^0$ denotes the data we want to train on, expressed as the input data. In our experiment, D = 92 and N = 45, where D is the input dimension of the graph, comprising the position of node $i$ and the distance features from the neighborhood of node $i$, and N is the number of nodes, which is the same as the number of landmarks (Fig. ). The encoder of the LTEM comprises two GCNN layers, which constitute the graph embedding, together with the weight matrices learned in these layers. Herein, $A$ is the connectivity of all nodes, shared by both layers. The output dimensions of the first and second layers are set to 64 and 32, respectively. Our model utilizes the IEM and LTEM to obtain embeddings of the images and landmarks, and then concatenates these embedding vectors to ultimately predict the surgical movement vectors. We trained the model using the L1 loss between the predicted surgical movement vectors and the gold standard. We also used the Adam optimizer , which combines momentum with exponentially weighted moving averages of the gradients, to update the weights of our networks. The learning rate was initially set to 0.001 and then decreased by a factor of 10 whenever the accuracy of the networks on the validation dataset stopped improving. In total, the learning rate was decreased three times to end the training. The networks were constructed under the open-source machine learning framework of PyTorch 1.8 and Python 3.6, with training performed on an NVIDIA RTX A6000 GPU. For the model training, we adopted a data augmentation strategy to enhance its robustness and generalization ability. This data augmentation strategy could prevent overfitting and lead to robust model performance, particularly when a limited training dataset is used. Data augmentation was performed on the image and graph inputs to increase the training dataset. When the spatial information of an image was transformed, such as by random rotation and random shift, the same augmentation was applied to the input of the graph. For the gamma, sharpness, blurriness, and random noise augmentations, the spatial information of the image was not transformed; thus, these were applied only to the image and not to the graph input.
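A minimal PyTorch sketch of one graph-convolution step of the form above; the class name, tensor shapes, and the way the connectivity matrix is passed in are assumptions for illustration, not the released implementation.

import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Implements f_i^{k+1} = ReLU(f_i^k W1 + sum_j e_ij f_j^k W2)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, out_dim, bias=False)   # self term (W1)
        self.w2 = nn.Linear(in_dim, out_dim, bias=False)   # neighbor term (W2)

    def forward(self, f, e):
        # f: (N, in_dim) node features; e: (N, N) connectivity/edge weights
        return torch.relu(self.w1(f) + e @ self.w2(f))

# Two layers with output dimensions 64 and 32, as described for the LTEM encoder
gcn1, gcn2 = GCNLayer(92, 64), GCNLayer(64, 32)
f = torch.randn(45, 92)                          # 45 landmark nodes, D = 92 input features
e = torch.softmax(torch.randn(45, 45), dim=-1)   # illustrative connectivity matrix
embedding = gcn2(gcn1(f, e), e)                  # (45, 32) graph embedding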
Generation module

Image compression (Fig. ). The objective of our generation module is to generate spost-cephs using the pre-ceph as input. To achieve this, we employed a latent diffusion model consisting of an autoencoder, with encoder $\mathcal{E}$ and decoder $\mathcal{D}$, and a diffusion model for generating the encoded latent (Fig. ). To train the autoencoder, we used not only the pre-ceph and post-ceph data but also an unlabeled set of 30,000 lateral cephalograms sourced from an internal institution (Hospital J). This was important to ensure that the latent space of the autoencoder was well-formed, guaranteeing a minimal generation capability . Additionally, we employed vector quantization , , which uses a discrete codebook $Z \in \mathbb{R}^{16 \times 128 \times 128}$, and adversarial learning techniques to enhance model stability and achieve high-quality results. The loss function is as follows:

(1) $$\mathcal{L}_{VQ}(\mathcal{E}, \mathcal{D}, Z) = \| x - \hat{x} \|^{2} + \| \mathrm{sg}[\mathcal{E}(x)] - z_{q} \|_{2}^{2} + \| \mathrm{sg}[z_{q}] - \mathcal{E}(x) \|_{2}^{2} + \lambda\, \mathcal{L}_{GAN}(\{\mathcal{E}, \mathcal{D}, Z\}, D),$$

where $D$ is the patch-based discriminator, $\hat{x} = \mathcal{D}(\mathcal{E}(x))$, and $\mathcal{L}_{GAN}(\{\mathcal{E}, \mathcal{D}, Z\}, D) = [\log D(x) + \log(1 - D(\hat{x}))]$.
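The terms of Eq. (1) can be sketched in PyTorch as follows. This is a simplified illustration (variable names and the handling of the codebook lookup are assumptions), with the adversarial term kept separate because the discriminator and the autoencoder are optimized adversarially rather than by a single objective.

import torch
import torch.nn.functional as F

def vq_terms(x, x_hat, z_e, z_q):
    # x, x_hat: input and reconstructed cephalograms; z_e = E(x); z_q: quantized latent
    rec = F.mse_loss(x_hat, x)                 # ||x - x_hat||^2
    codebook = F.mse_loss(z_q, z_e.detach())   # ||sg[E(x)] - z_q||^2
    commit = F.mse_loss(z_e, z_q.detach())     # ||sg[z_q] - E(x)||^2
    return rec + codebook + commit

def gan_term(d_real, d_fake, eps=1e-6):
    # log D(x) + log(1 - D(x_hat)): maximized when training the discriminator,
    # while the autoencoder is updated to fool D (weighted by lambda in Eq. (1))
    return torch.mean(torch.log(d_real + eps) + torch.log(1.0 - d_fake + eps))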
Diffusion model. The encoded data distribution $q(z_0)$ is gradually converted into a well-behaved distribution $\pi(y)$ by the repeated application of a Markov diffusion kernel $T_\pi(y \mid y'; \beta)$ for $\pi(y)$. Then,

(2) $$q(z_t \mid z_{t-1}) = T_\pi(z_t \mid z_{t-1}; \beta_t) = \mathcal{N}\left(z_t; \sqrt{1 - \beta_t}\, z_{t-1}, \beta_t \mathbf{I}\right).$$

Meanwhile, the forward trajectory, starting at the data distribution and performing $T = 1000$ steps of the diffusion process, is as follows:

$$q(z_{0:T}) = q(z_0) \prod_{t=1}^{T} q(z_t \mid z_{t-1}),$$

where $z_1, z_2, \ldots, z_T$ are latents of the same dimension as the data $z_0$. The forward process admits sampling $z_t$ at an arbitrary timestep $t$ in closed form. Using the notation $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$, we obtain the analytical form of $q(z_t \mid z_0)$ as follows:

(3) $$q(z_t \mid z_0) = \mathcal{N}\left(z_t; \sqrt{\bar{\alpha}_t}\, z_0, (1 - \bar{\alpha}_t)\mathbf{I}\right).$$

We can thus easily obtain a sample from any intermediate distribution of the diffusion process:

(4) $$z_t = \sqrt{\bar{\alpha}_t}\, z_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon.$$
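Equation (4) turns forward noising into a single closed-form step. A minimal sketch using the β values given below (β_0 = 0.0015, β_T = 0.0195); the linear shape of the schedule and the latent size are assumptions for illustration.

import torch

def forward_noise(z0, t, betas):
    """Sample z_t ~ q(z_t | z_0) = N(sqrt(alpha_bar_t) z_0, (1 - alpha_bar_t) I)."""
    alphas = 1.0 - betas                        # alpha_t = 1 - beta_t
    alpha_bar = torch.cumprod(alphas, dim=0)    # alpha_bar_t = prod_{s<=t} alpha_s
    eps = torch.randn_like(z0)                  # standard Gaussian noise
    a = alpha_bar[t]
    return a.sqrt() * z0 + (1.0 - a).sqrt() * eps, eps

betas = torch.linspace(0.0015, 0.0195, 1000)            # T = 1000 steps
z_t, eps = forward_noise(torch.randn(16, 128, 128), t=500, betas=betas)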
Diffusion models are latent variable models of the parameterized distribution $p_\theta(z_0) = \int p_\theta(z_{0:T})\, dz_{1:T}$. The reverse trajectory, starting at the prior distribution, is as follows:

(5) $$p_\theta(z_{0:T}) = p(z_T) \prod_{t=1}^{T} p_\theta(z_{t-1} \mid z_t),$$

where $p(z_T) = \pi(z_T)$ and $p_\theta(z_{t-1} \mid z_t) = \mathcal{N}\left(z_{t-1}; \mu_\theta(z_t, t), \Sigma_\theta(z_t, t)\right)$, and $\mu_\theta(z_t, t)$ and $\Sigma_\theta(z_t, t)$ are training targets defining the mean and covariance, respectively, of the reverse Markov transitions for a Gaussian distribution. To approximate the data distribution $q(z_0)$ with the parameterized distribution $p_\theta(z_0)$, training is performed by optimizing the variational lower bound on the negative log likelihood:

(6) $$\mathbb{E}_{z \sim q(z)}\left[-\log p_\theta(z)\right] \le \mathbb{E}_{z \sim q(z)}\left[-\log p(z_T) - \sum_{t \ge 1} \log \frac{p_\theta(z_{t-1} \mid z_t)}{q(z_t \mid z_{t-1})}\right] = \mathcal{L}_{vlb}.$$

For efficient training, further improvement is made by re-expressing $\mathcal{L}_{vlb}$ as follows:

(7) $$\mathcal{L}_{vlb} = \mathbb{E}_{z \sim q(z)}\left[D_{KL}\left(q(z_T \mid z_0)\,\|\,p(z_T)\right) + \sum_{t > 1} D_{KL}\left(q(z_{t-1} \mid z_t, z_0)\,\|\,p_\theta(z_{t-1} \mid z_t)\right) - \log p_\theta(z_0 \mid z_1)\right].$$

The equation uses the Kullback–Leibler divergence to directly compare $p_\theta(z_{t-1} \mid z_t)$ against the forward-process posteriors. The posterior distributions are tractable when conditioned on $z_0$:

(8) $$q(z_{t-1} \mid z_t, z_0) = q(z_t \mid z_{t-1}) \frac{q(z_{t-1} \mid z_0)}{q(z_t \mid z_0)} = \mathcal{N}\left(z_{t-1}; \tilde{\mu}_t(z_t, z_0), \tilde{\beta}_t \mathbf{I}\right),$$

where $\tilde{\mu}_t(z_t, z_0) = \frac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_t}{1 - \bar{\alpha}_t} z_0 + \frac{\sqrt{\alpha_t}\,(1 - \bar{\alpha}_{t-1})}{1 - \bar{\alpha}_t} z_t$ and $\tilde{\beta}_t = \frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t} \beta_t$, and the values of $\beta_0$ and $\beta_T$ were 0.0015 and 0.0195, respectively. Then, the loss function can be defined as follows:

(9) $$\mathcal{L}_{simple} = \mathbb{E}_{\mathcal{E}(x),\, \epsilon}\left[\|\epsilon - \epsilon_\theta(z_t, t)\|^2\right].$$

After training, samples can be generated by starting from $z_T \sim \mathcal{N}(0, \mathbf{I})$ and following the parameterized reverse Markov chain:

(10) $$z_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(z_t - \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}}\, \epsilon_\theta(z_t, t)\right) + \sigma_t z.$$
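A minimal sketch of the ancestral sampling step in Eq. (10); the choice σ_t = sqrt(β_t) and the function signature are assumptions for illustration.

import torch

@torch.no_grad()
def reverse_step(eps_model, z_t, t, betas, alpha_bar):
    """One update z_t -> z_{t-1} following Eq. (10)."""
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    eps_hat = eps_model(z_t, t)                                  # epsilon_theta(z_t, t)
    mean = (z_t - (1.0 - alpha_t) / (1.0 - alpha_bar[t]).sqrt() * eps_hat) / alpha_t.sqrt()
    if t == 0:
        return mean                                              # no noise added at the final step
    sigma_t = beta_t.sqrt()                                      # assumed choice of sigma_t
    return mean + sigma_t * torch.randn_like(z_t)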
Furthermore, we aimed to generate spost-cephs using multiple conditions in the diffusion model. We used a total of four conditions, including the pre-cephs and their profile lines, which were concatenated, whereas the pre-ceph landmarks and the movement vectors predicted through the IEM and LTEM were latentized using a graph network and subsequently embedded into the diffusion model via a cross-attention module. Then, we can train the conditional diffusion model using the conditions $c$ via

(11) $$\mathcal{L}_{condition} = \mathbb{E}_{\mathcal{E}(x)}\left[\|\epsilon - \epsilon_\theta(z_t, c, t)\|^2\right],$$

where $c = [m, x^{pre}, l^{pre}, p^{pre}]$, $m \in \mathbb{R}^{45 \times 45}$ is the surgical movement vector predicted through the graph network, and $x^{pre} \in \mathbb{R}^{1 \times 1024 \times 1024}$, $l^{pre} \in \mathbb{R}^{45 \times 45}$, and $p^{pre} \in \mathbb{R}^{1 \times 1024 \times 1024}$ represent the pre-ceph, the landmarks of the pre-ceph, and the profile line of the pre-ceph, respectively. Additionally, we used the LTEM model to embed $m$ and $l^{pre}$ into the diffusion model. We used an untrained embedding model for this purpose, which is trained jointly as the diffusion model is trained. After training, sampling is performed using the trained diffusion model. To reduce the generation time and maintain consistency, a DDIM was used. The formula for DDIM is as follows:

(12) $$z_{\tau_{t-1}} = \sqrt{\alpha_{\tau_{t-1}}}\left(\frac{z_{\tau_t} - \sqrt{1 - \alpha_{\tau_t}}\, \epsilon_\theta^{(t)}(z_{\tau_t})}{\sqrt{\alpha_{\tau_t}}}\right) + \sqrt{1 - \alpha_{\tau_{t-1}}} \cdot \epsilon_\theta^{(t)}(z_{\tau_t}),$$

where $\tau$ is a sub-sequence of timesteps of length $T$. To train the generation module, we utilized the Adam optimizer , which combines momentum and exponentially weighted moving average gradient methods. The initial learning rate was set to 2e−6, and we trained the model for a total of 1000 epochs. The networks were implemented using open-source machine learning frameworks such as PyTorch 1.8 and Python 3.6, with training performed on an NVIDIA RTX A6000 48GB GPU. However, we did not employ data augmentation in our training process.
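A deterministic DDIM update corresponding to Eq. (12) can be sketched as follows (η = 0; here alpha_bar plays the role of the cumulative product used in the DDIM notation, and the function signature is illustrative).

import torch

@torch.no_grad()
def ddim_step(eps_model, z, tau_t, tau_prev, alpha_bar, cond=None):
    """One DDIM update z_{tau_t} -> z_{tau_{t-1}} along a sub-sequence of timesteps."""
    a_t, a_prev = alpha_bar[tau_t], alpha_bar[tau_prev]
    eps_hat = eps_model(z, tau_t, cond)                        # epsilon_theta^{(t)}(z_{tau_t})
    z0_hat = (z - (1.0 - a_t).sqrt() * eps_hat) / a_t.sqrt()   # predicted clean latent
    return a_prev.sqrt() * z0_hat + (1.0 - a_prev).sqrt() * eps_hat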
Classifier-free guidance for digital twin

To conduct experiments for generating various surgical movements, we used classifier-free guidance (CFG) . Unlike classifier guidance , , CFG is distinct in that the classifier model is not separate from the diffusion model but is trained together with it. CFG achieves an effect similar to modifying the noise estimate $\epsilon$ in classifier-guidance sampling, but without the separate classifier. The diffusion model can be trained by setting either a condition $c$ or a null token $\emptyset$ as the model input with some probability. Then, we defined the estimated score using model $\theta$ for the input condition $c$ as $\epsilon_\theta(z_t, t, c)$, and the estimated score for the null token as $\epsilon_\theta(z_t, t, \emptyset) = \epsilon_\theta(z_t, t)$. After training, we modified the score using a linear combination of the unconditional and conditional scores according to the IASM. The CFG sampling method is known to be robust against gradient-based adversarial attacks, whereas classifier-guidance sampling with a poorly trained classifier may lead to problems in consistency and fidelity. The score estimated by CFG sampling is as follows:

(13) $$\tilde{\epsilon}_\theta(z_t, t, c) = (1 + s) \cdot \epsilon_\theta(z_t, t, c) - s \cdot \epsilon_\theta(z_t, t).$$
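The guided score of Eq. (13) is a simple linear combination of the conditional and unconditional noise predictions. A minimal sketch; how the guidance weight s is derived from the IASM is not specified here and is left as an input.

import torch

def cfg_epsilon(eps_model, z_t, t, cond, s):
    """epsilon_tilde = (1 + s) * eps(z_t, t, c) - s * eps(z_t, t)."""
    eps_cond = eps_model(z_t, t, cond)     # conditional estimate eps_theta(z_t, t, c)
    eps_uncond = eps_model(z_t, t, None)   # null-token (unconditional) estimate
    return (1.0 + s) * eps_cond - s * eps_uncond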
Preprocessing of dataset

Before training, all lateral cephalograms were standardized to a pixel spacing of 0.1 mm. Subsequently, the post-ceph was conventionally aligned with the pre-ceph based on the Sella–Nasion (SN) line. To include all landmarks in both the pre-ceph and post-ceph, a rectangle encompassing the regions defined by the Basion, Soft-tissue menton, Pronasale, and Glabella points in both pre-ceph and post-ceph was cropped. Additionally, zero padding was applied horizontally and vertically to create a square image with a resolution of 1024 × 1024. The cropped image was divided by the maximum pixel value of the image, and pixel normalization was performed such that the pixel values were within 0–1. In addition, the coordinates of each landmark and the distances among landmarks were expressed as vectors to train the model. Before input to the model, the x- and y-axis distances were divided by the width and height of the cropped picture, and normalization was performed such that the feature values were within the range of 0–1.

Statistical analysis

All statistical analyses were performed using IBM SPSS Statistics (IBM Corporation, Armonk, NY, USA) version 25.

Landmark distance comparison for post-ceph and spost-ceph

Two ODs traced the post-cephs and spost-cephs in the internal (n = 50) and external (n = 57) test sets. The SN − 7° line was set as the horizontal reference line, and the line passing through the S point and perpendicular to the SN − 7° line was set as the vertical reference line. The horizontal and vertical distances of each landmark from these reference lines were used as coordinate values. The coordinate values of the same landmark in the post-ceph and spost-ceph were compared, and the distance between the landmarks was calculated. A paired equivalence test was performed for each landmark. In this case, the margin of error applied was 1.5 mm , . The SPRs for each point were assessed according to errors <2.0 mm. Furthermore, we measured the distance between the profile lines of the post-ceph and spost-ceph. Taking anatomical structures into account, we divided them into four lines, and the distances between the lines were measured using the Hausdorff distance. Details on the errors in the profile lines and the definition of the four profile lines can be found in the Supplementary Table and Supplementary Fig. of the supplementary materials.
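The profile-line comparison can be illustrated with SciPy's directed Hausdorff distance. A minimal sketch; the extraction of the four anatomical line segments follows the supplementary definitions and is not shown.

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def profile_hausdorff(line_a, line_b):
    """Symmetric Hausdorff distance (mm) between two profile polylines,
    each given as an (n, 2) array of point coordinates."""
    return max(directed_hausdorff(line_a, line_b)[0],
               directed_hausdorff(line_b, line_a)[0])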
Visual Turing test

For the VTT, 57 external test images (29 post-cephs and 28 spost-cephs) were used, as the OMFSs and ODs had already observed the generated internal dataset during the digital twin experiment. The VTT was conducted with two ODs and two OMFSs by displaying the images one by one through a dedicated web-based interface. Each examiner had more than 15 years of clinical experience. To reduce environmental variability, the images were displayed in the same order, and revisiting previous answers was prohibited. The examiners were informed that there were 29 real and 28 synthesized images. In addition, none had prior experience with synthesized images before the test. All examiners successfully completed the test. Sensitivity, specificity, and accuracy were derived, with real images defined as positive and synthetic images as negative.

Digital twin

We investigated the clinical applicability of the spost-cephs as digital twins for simulated surgical planning. Two ODs and two OMFSs were simultaneously shown a pre-ceph and five spost-cephs randomly generated at different degrees of surgical movement. To focus on cases with significant surgical changes, patients with surgical movement of ≤5 mm were excluded, resulting in the selection of 35 cases from the initial internal test set of 50. Subsequently, the examiners were asked to select an appropriate surgical movement amount considering the pre-ceph. The percentage of selected spost-cephs reflecting the real surgical movements was then calculated.

Ablation study

The ablation study was conducted using an internal dataset of 50 samples. A single OD manually measured the landmarks for each experimental condition. Given the intensive nature of manual landmark annotation, only the internal dataset was used to ensure feasibility while maintaining evaluation consistency. Paired t-tests were performed at each of the five experimental stages to compare the results with those from the preceding stage, assessing the impact on the landmark distance error. Statistical significance was set at p < 0.05, with p < 0.005 considered highly significant.

Reporting summary

Further information on research design is available in the Reporting Summary linked to this article.
This retrospective study was conducted according to the principles of the Declaration of Helsinki. This nationwide study was reviewed and approved by the Institutional Review Board Committee of ten institutions: (A) Seoul National University Dental Hospital (SNUDH) (ERI20022), (B) Kyung Hee University Dental Hospital (KHUDH) (19-007-003), (C) Kooalldam Dental Hospital (KOO) (P01-202105-21-019), (D) Kyungpook National University Dental Hospital (KNUDH) (KNUDH-2019-03-02-00), (E) Wonkwang University Dental Hospital (WUDH) (WKDIRB201903-01), (F) Korea University Anam Hospital (KUDH) (2019AN0166), (G) Ewha Woman’s University Dental Hospital (EUMC) (EUMC 2019-04-017-003), (H) Chonnam National University Dental Hospital (CNUDH) (2019-004), (I) Ajou University Dental Hospital (AUDH) (AJIRB-MED-MDB-19-039), and (J) Asan Medical Center (AMC) (2019-0927). The requirement for patient consent was waived by each center’s Institutional Review Board Committee.
Based on the IASM and pre-ceph, the spost-ceph is generated by GPOSC-Net. In this study, two ODs traced the spost-cephs and compared them with post - cephs to evaluate the accuracy of the landmark positions and the soft- and hard-tissue profile lines. 45 landmarks were digitized by experienced orthodontists using the V-ceph software (Version 8.0, Osstem, Seoul, Korea). Additionally, a VTT was conducted with two ODs and two OMFSs to validate the quality of the spost-cephs. During the spost-ceph generation process, additional images reflecting various amounts of surgical movement were generated and reviewed to establish an appropriate surgical plan (Fig. ). The proposed GPOSC-Net model is visualized in Fig. .
A total of 707 patients with malocclusion who underwent orthognathic surgery (OGS) between 2007 and 2019 at one of nine university hospitals and/or one dental hospital and had lateral cephalograms taken before and after surgery (Fig. ) were included in this study (Fig. ). The age of the patients ranged from 16 to 50 years. All lateral cephalogram pairs were anonymized and stored in Digital Imaging and Communications in Medicine (DICOM) format as 12-bit grayscale images. The gender distribution of the patients was nearly equal (Fig. ). In this study, sex or gender was not considered as a factor in the experiments. The average duration of pre-surgical orthodontic treatment was 14 months, although some patients required 2 to 3 years to complete the pre-surgical phase (Fig. ). We initially selected hospitals A, B, and C, which had the richest datasets, as our primary sources for the internal dataset. However, the majority of the patients from institutions A and B underwent two-jaw surgery (Fig. ). Consequently, to prevent a bias in the deep learning model toward patients that underwent one-jaw surgeries, we incorporated data from institution D, which had a higher proportion of patients who underwent one-jaw surgery, into our internal dataset. Through this process, a dataset comprising a total of 707 pairs was constructed, of which 550 were utilized as the training dataset, 50 as the validation dataset, and 50 as the internal test set. Additionally, we employed 57 pairs of pre-cephs and post-cephs from university hospitals E, F, G, H, I, and J as the external test set, because the different institutions had different cephalogram machines. In addition, there were variations in the imaging protocols and in the quality of the cephalograms. With regard to the direction of surgical movement, the majority of anterior nasal spine (ANS), posterior nasal spine (PNS), and upper-lip landmarks moved anteriorly and superiorly, whereas the majority of B-point, Md 1 crown, lower lip, soft-tissue pogonion, and soft-tissue menton landmarks moved posteriorly and superiorly (Fig. ). The reason for these surgical movements was that most of the patients had skeletal Class III malocclusions, which needed anterior movement of the maxilla and posterior movement of the mandible. For most of the OGSs, the maxilla moved within 10 mm, whereas the mandible moved within 15 mm. Detailed information regarding the composition, demographic characteristics, and cephalography machines, among others, is provided in Supplementary Table of the supplementary materials.
Overview of GPOSC-Net Herein, we propose generative prediction for orthognathic surgery using ceph network (GPOSC-Net) , which comprises two models: a two-module combination of our CNN-based image embedding module (IEM) and a GCNN-based landmark topology embedding module (LTEM), which predict the movement of landmarks that would occur as a result of OGS; and a latent diffusion model , which is used to generate spost-cephs (Fig. ). The IEM utilizes a high-resolution network to maintain detailed representations of lateral cephalometric images. Before proceeding to the next step, the output of the IEM is subjected to channel coupling by the channel relation score module (CRSM), which calculates the relation score between channels of a feature map. On the other hand, the LTEM employs a GCNN to train the topological structures and spatial relationships of 45 hard- and soft-tissue landmarks. Finally, the movement of these landmarks is predicted by a multi-layer perceptron (MLP) module, which uses the combined outputs of IEM and LTEM. To generate spost-cephs, the model uses a set of conditions that includes the movement of landmarks obtained through IEM and LTEM, along with segmented profile lines of pre-ceph. This approach aims to ensure a minimal generation ability for our system. To reinforce this capability, we trained an autoencoder on a dual dataset, including one with labeled pre-ceph and post-ceph images, and The other is an extensive unlabeled set of 30,000 lateral cephalograms, randomly collected between 2007 and 2020, which are unrelated to any pre- or post-surgical conditions or orthodontic treatment, and are sourced from an internal institution (Hospital J). The learning methods and model structure and description are explained in detail further in this paper. Finally, we employed the IASM during the testing phase to generate serial spost-ceph images corresponding to various amounts of virtual surgical movement. IASM made it possible to calibrate the expected surgical movement ratio precisely across a continuous spectrum from 0 to 1.6, where a value of 0 represents no surgical movement (similar to pre-ceph, 0%), a value of 1 corresponds to the full predicted movement (similar to post-ceph, 100%), and a value of 1.6 equates to an enhanced projection with a 160% setback. This enabled the serial generation of spost-ceph images with nuanced variations in surgical movement. For IASM ranging from 0.1 to 1.6, five spost-ceph images, including for IASM 1, were randomly generated, and an appropriate treatment goal based on the pre-ceph was selected by two ODs and two OMFSs in a blind condition. Surgical movement vector prediction modules As indicated earlier, our model consists of IEM and LTEM , which are trained using images and landmarks, respectively (Fig. ). The IEM adopted HR-NET as its backbone and was trained to represent a ceph as a low-dimensional feature map. To correspond to each landmark, the feature map outputs 45 channels, where each channel has dimensions of 45 × 45. CRSM is used to measure a relationship score matrix between distinct channels; similarly, the matrix has dimensions of 45 × 45. Finally, an image feature vector is evaluated using a weighted combination of the flattened feature map and relationship score matrix. On the other hand, the LTEM was designed based on the GCNN to learn the topological structures of landmarks. The training process of the LTEM is as follows: [12pt]{minimal}
$${{{}}}({{{{}}}}_{{{{}}}}^{{{{}}}})\,={{{{}}}}_{{{{}}}}^{{{{}}}+1}={{{}}}(.{{{{}}}}_{{{{}}}}^{{{{}}}}{{{{}}}}_{1}+{}_{{{{}}}}{{{{}}}}_{{{{}}}}{{{{}}}}_{{{{}}}}^{{{{}}}}({{{{}}}}_{2})$$ GCNN ( f i k ) = f i k + 1 = ReLU f i k W 1 + ∑ j e ij f i j ( W 2 ) , where [12pt]{minimal}
$${{{{}}}}_{1}$$ W 1 and [12pt]{minimal}
$${{{{}}}}_{2}$$ W 2 are weight matrices learned from the training, [12pt]{minimal}
$${{{}}}$$ f denotes node features, and [12pt]{minimal}
$${{{}}}$$ e is the edge of the graph. Meanwhile, ReLU(·) = max(0, ·) is the nonlinear activation function, is the learnable connectivity at the i th node from A, denotes the data we want to train, and is expressed as input data. In our experiment, D = 92 and N = 45, where D is the input dimension of the graph, the position of the i-node, and the distance features from the neighborhood of node i; and N is the number of nodes, which is the same as the number of landmarks (Fig. ). The encoder of the LTEM comprises two layers of the GCNN, which is the graph embedding, and the learned weight matrices in these layers. Herein, A is the connectivity of all nodes shared by both layers. The output dimensions of the first and second layers are set to 64 and 32, respectively. Our model utilizes IEM and LTEM to obtain embeddings of images and landmarks, and then concatenates these embedding vectors to ultimately predict the surgical movement vectors. We trained the model using the L1 loss between the predicted surgical movement vectors and the gold standard. We also used the Adam optimizer , which combined the momentum and exponentially weighted moving average gradients methods, to update the weights of our networks. The learning rate was initially set to 0.001, and then decreased by a factor of 10 when the accuracy of the networks on the validation dataset stopped improving. In total, the learning rate was decreased three times to end the training. The networks were constructed under the open-source machine learning framework of PyTorch 1.8 and Python 3.6, with training performed on an NVIDIA RTX A6000 GPU. For the model training, we adopted a data augmentation strategy to enhance its robustness and generalization ability. This data augmentation strategy could prevent overfitting and lead to robust model performance, particularly when a limited training dataset is used. Data augmentation was performed on the image and graph inputs to increase the training dataset. When the spatial information of an image was transformed, such as by random rotation and random shift, the same augmentation was applied to the input of the graph. For the gamma, sharpness, blurriness, and random noise, the spatial information of the image was not transformed; thus, these were applied only to the image and not to the graph input. Generation module Image compression (Fig. ). The objective of our generation module is to generate spost-cephs using pre - ceph as input. To achieve this, we employed a latent diffusion model consisting of an autoencoder for encoder [12pt]{minimal}
$${{{}}}$$ E and decoder [12pt]{minimal}
$${{{}}}$$ D and a diffusion model for generating the encoding latent (Fig. ). To train the autoencoder, we used not only pre-ceph and post-ceph data but also an unlabeled set of 30,000 lateral cephalograms sourced from an internal institution (Hospital J). This was important to ensure that the latent space of the autoencoder was well-formed, guaranteeing minimal generation capability . Additionally, we employed vector quantization , , which uses a discrete codebook [12pt]{minimal}
$${{{}}}{{{}}}{{}}^{16 128 128}$$ Z R 16 × 128 × 128 , and adversarial learning techniques to enhance model stability and achieve high-quality results. The loss function is as follows. 1 [12pt]{minimal}
$${{{{}}}}_{{VQ}}({{{}}},{{{}}},{{{}}})={{||}{{{}}}-}}}}{||}}^{2}+{{||}{{{}}}[{{{}}}({{{}}})]-{{{{}}}}_{{{{}}}}{||}}_{2}^{2}+{{||}{{{}}}[{{{{}}}}_{{{{}}}}]{{{}}}{{{}}}({{{}}}){||}}_{2}^{2}+ {{{{}}}}_{{GAN}}(\{{{{}}},{{{}}},{{{}}}\},\,D)$$ L V Q E , D , Z = ∣ ∣ x − x ^ ∣ ∣ 2 + ∣ ∣ sg E x − z q ∣ ∣ 2 2 + ∣ ∣ sg z q - E x ∣ ∣ 2 2 + λ L G A N E , D , Z , D where [12pt]{minimal}
$$D$$ D is the patch-based discriminator, [12pt]{minimal}
$$={{{}}}({{{}}}(x))$$ x ^ = D E x , and [12pt]{minimal}
$${{{{}}}}_{{GAN}}(\{{{{}}},{{{}}},{{{}}}\},\,D)=[ D(x)+ (1-D())]$$ L G A N E , D , Z , D = log D x + log 1 − D x ^ Diffusion model. The encoded data distribution [12pt]{minimal}
$$q({z}_{0})$$ q z 0 is gradually converted into a well-behaved distribution [12pt]{minimal}
$${{{}}}(y)$$ π y by repeated application of a Markov diffusion kernel [12pt]{minimal}
$${T}_{{{{}}}}(\,{y|y;}{{{}}})$$ T π y ∣ y ; β for π (y) . Then, 2 [12pt]{minimal}
$$q({z}_{t}{|z})={T}_{{{{}}}}({z}_{t}|{z}_{t-1};{{{{}}}}_{t})={{{}}}({z}_{t};}}}}_{t}}{z}_{t-1},{{{{}}}}_{t}{{{}}})$$ q z t ∣ z = T π z t ∣ z t − 1 ; β t = N z t ; 1 − β t z t − 1 , β t I Meanwhile, the forward trajectory, starting at the data distribution and performing [12pt]{minimal}
$$T={{}}$$ T = 1000 steps of diffusion process, is as follows: [12pt]{minimal}
$$q({z}_{0:T})=q({z}_{0}){ }_{t=1}^{T}q({z}_{t} | {z}_{t-1})$$ q z 0 : T = q z 0 ∏ t = 1 T q z t ∣ z t − 1 , where [12pt]{minimal}
$${z}_{1},{z}_{2}, {z}_{T}$$ z 1 , z 2 , … z T are latents of the same dimension as the data [12pt]{minimal}
$${z}_{0}$$ z 0 . The forward process is that which admits sampling [12pt]{minimal}
$${z}_{t}$$ z t at an arbitary timestep [12pt]{minimal}
$$t$$ t in closed form. Using the notation [12pt]{minimal}
$${{{{}}}}_{t}=1-{{{{}}}}_{t}$$ α t = 1 − β t and [12pt]{minimal}
$${}}}}}_{t}={ }_{s=1}^{t}{{{{}}}}_{s}$$ α ¯ t = ∑ s = 1 t α s , then, we obtain the analytical form of [12pt]{minimal}
$$q({z}_{t} | {z}_{0})$$ q z t ∣ z 0 as follows. 3 [12pt]{minimal}
$$q({z}_{t} | {z}_{0})={{{}}}({z}_{t};}}}}}_{t}}{z}_{0},(1-{}}}}}_{t}){{{}}})$$ q z t ∣ z 0 = N z t ; α ¯ t z 0 , 1 − α ¯ t I We can easily obtain a sample in the immediate distribution of the diffusion process. 4 [12pt]{minimal}
$${z}_{t}=}}}}}_{t}}{z}_{0}+}}}}}_{t}}{{{}}}$$ z t = α ¯ t z 0 + 1 − α ¯ t ϵ Diffusion models are latent variable models of the parameterized distribution [12pt]{minimal}
Diffusion models are latent variable models with the parameterized distribution $p_{\theta}(z_0) = \int p_{\theta}(z_{0:T})\, dz_{1:T}$. The reverse trajectory, starting at the prior distribution, is as follows.

(5) $p_{\theta}(z_{0:T}) = p(z_T) \prod_{t=1}^{T} p_{\theta}(z_{t-1} \mid z_t)$

where $p(z_T) = \pi(z_T)$ and $p_{\theta}(z_{t-1} \mid z_t) = \mathcal{N}\!\left(z_{t-1}; \mu_{\theta}(z_t, t), \Sigma_{\theta}(z_t, t)\right)$, and $\mu_{\theta}(z_t, t)$ and $\Sigma_{\theta}(z_t, t)$ are training targets defining the mean and covariance, respectively, of the reverse Markov transitions for a Gaussian distribution. To bring the parameterized distribution $p_{\theta}(z_0)$ close to the data distribution $q(z_0)$, training is performed by optimizing the variational lower bound on the negative log likelihood.

(6) $\mathbb{E}_{z \sim q(z)}\!\left[-\log p_{\theta}(z_0)\right] \leq \mathbb{E}_{z \sim q(z)}\!\left[-\log p(z_T) - \sum_{t \geq 1} \log \frac{p_{\theta}(z_{t-1} \mid z_t)}{q(z_t \mid z_{t-1})}\right] = \mathcal{L}_{vlb}$

For efficient training, a further improvement is made by re-expressing $\mathcal{L}_{vlb}$ as follows.

(7) $\mathcal{L}_{vlb} = \mathbb{E}_{z \sim q(z)}\!\left[D_{KL}\!\left(q(z_T \mid z_0)\,\|\,p(z_T)\right) + \sum_{t > 1} D_{KL}\!\left(q(z_{t-1} \mid z_t, z_0)\,\|\,p_{\theta}(z_{t-1} \mid z_t)\right) - \log p_{\theta}(z_0 \mid z_1)\right]$
The equation uses the Kullback–Leibler divergence to directly compare $p_{\theta}(z_{t-1} \mid z_t)$ against the forward-process posteriors, which are tractable when conditioned on $z_0$.

(8) $q(z_{t-1} \mid z_t, z_0) = q(z_t \mid z_{t-1}, z_0)\,\frac{q(z_{t-1} \mid z_0)}{q(z_t \mid z_0)} = \mathcal{N}\!\left(z_{t-1}; \tilde{\mu}_t(z_t, z_0), \tilde{\beta}_t \mathbf{I}\right)$

where $\tilde{\mu}_t(z_t, z_0) = \frac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_t}{1 - \bar{\alpha}_t}\, z_0 + \frac{\sqrt{\alpha_t}\,(1 - \bar{\alpha}_{t-1})}{1 - \bar{\alpha}_t}\, z_t$ and $\tilde{\beta}_t = \frac{1 - \bar{\alpha}_{t-1}}{1 - \bar{\alpha}_t}\,\beta_t$, and the values of $\beta_0$ and $\beta_T$ were 0.0015 and 0.0195, respectively. Then, the loss function can be defined as follows.

(9) $\mathcal{L}_{simple} = \mathbb{E}_{E(x), q}\!\left[\lVert \epsilon - \epsilon_{\theta}(z_t, t) \rVert^{2}\right]$
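Eq. (9) corresponds to the following training step, sketched here with an assumed eps_model(z_t, t) signature; it is an illustration of the simplified objective, not the released training loop.

```python
# Sketch of the simplified training objective in Eq. (9): predict the added noise at a random step.
import torch
import torch.nn.functional as F


def diffusion_training_step(eps_model, z0: torch.Tensor, alpha_bar: torch.Tensor) -> torch.Tensor:
    b = z0.shape[0]
    t = torch.randint(0, alpha_bar.numel(), (b,), device=z0.device)   # uniform random timestep
    noise = torch.randn_like(z0)
    ab = alpha_bar[t].view(-1, 1, 1, 1)
    z_t = ab.sqrt() * z0 + (1.0 - ab).sqrt() * noise                   # Eq. (4)
    return F.mse_loss(eps_model(z_t, t), noise)                        # ||eps - eps_theta(z_t, t)||^2
```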
After training, samples can be generated by starting from $z_T \sim \mathcal{N}(0, \mathbf{I})$ and following the parameterized reverse Markov chain.

(10) $z_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(z_t - \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}}\,\epsilon_{\theta}(z_t, t)\right) + \sigma_t z$
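Eq. (10) yields the ancestral sampling loop sketched below; the choice $\sigma_t = \sqrt{\beta_t}$ is one common option and is assumed here, as is the eps_model signature.

```python
# Sketch of ancestral sampling with Eq. (10), starting from z_T ~ N(0, I).
import torch


@torch.no_grad()
def ddpm_sample(eps_model, shape, betas: torch.Tensor) -> torch.Tensor:
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    z = torch.randn(shape)
    for t in reversed(range(betas.numel())):
        eps = eps_model(z, torch.full((shape[0],), t, dtype=torch.long))
        mean = (z - (1.0 - alphas[t]) / (1.0 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        noise = torch.randn_like(z) if t > 0 else torch.zeros_like(z)
        z = mean + betas[t].sqrt() * noise            # sigma_t = sqrt(beta_t), an assumed choice
    return z
```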
Furthermore, we aimed to generate spost-cephs using multiple conditions in the diffusion model. We used a total of four conditions: the pre-cephs and their profile lines, which were concatenated, whereas the pre-ceph landmarks and the movement vectors predicted through the IEM and LTEM were latentized using a graph network and subsequently embedded into the diffusion model via a cross-attention module. We can then train the conditional diffusion model using conditions $c$ via

(11) $\mathcal{L}_{condition} = \mathbb{E}_{E(x)}\!\left[\lVert \epsilon - \epsilon_{\theta}(z_t, c, t) \rVert^{2}\right]$

where $c = [m, x^{pre}, l^{pre}, p^{pre}]$, $m \in \mathbb{R}^{45 \times 45}$ is the surgical movement vector predicted through the graph network, and $x^{pre} \in \mathbb{R}^{1 \times 1024 \times 1024}$, $l^{pre} \in \mathbb{R}^{45 \times 45}$, and $p^{pre} \in \mathbb{R}^{1 \times 1024 \times 1024}$ represent the pre-ceph, the landmarks of the pre-ceph, and the profile line of the pre-ceph, respectively. Additionally, we used the LTEM model to embed $m$ and $l^{pre}$ into the diffusion model; this embedding model starts untrained and is trained jointly as the diffusion model is trained. After training, sampling is performed using the trained diffusion model. To reduce the generation time and maintain consistency, a DDIM was used. The formula for DDIM is as follows:

(12) $z_{\tau_{t-1}} = \sqrt{\bar{\alpha}_{\tau_{t-1}}}\left(\frac{z_{\tau_t} - \sqrt{1 - \bar{\alpha}_{\tau_t}}\,\epsilon_{\theta}^{(t)}(z_{\tau_t})}{\sqrt{\bar{\alpha}_{\tau_t}}}\right) + \sqrt{1 - \bar{\alpha}_{\tau_{t-1}}} \cdot \epsilon_{\theta}^{(t)}(z_{\tau_t})$

where $\tau$ is a sub-sequence of timesteps of length $T$.
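The deterministic DDIM update of Eq. (12) can be written as a single step function, as sketched below; timestep bookkeeping over the sub-sequence tau is left out for brevity, and the function name is illustrative.

```python
# Sketch of the deterministic DDIM update in Eq. (12) over a sub-sequence of timesteps tau.
import torch


@torch.no_grad()
def ddim_step(z_t, eps, alpha_bar_t, alpha_bar_prev):
    # Predict z_0 from the current noise estimate, then jump to the previous timestep in tau.
    z0_pred = (z_t - (1.0 - alpha_bar_t).sqrt() * eps) / alpha_bar_t.sqrt()
    return alpha_bar_prev.sqrt() * z0_pred + (1.0 - alpha_bar_prev).sqrt() * eps
```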
To train the generation module, we utilized the Adam optimizer, which combines the momentum and exponentially weighted moving-average gradient methods. The initial learning rate was set to 2e−6, and we trained the model for a total of 1000 epochs. The networks were implemented using the open-source machine learning frameworks PyTorch 1.8 and Python 3.6, with training performed on an NVIDIA RTX A6000 48 GB GPU. We did not employ data augmentation in this training process.

Classifier-free guidance for digital twin

To conduct experiments generating various surgical movements, we used classifier-free guidance (CFG). Unlike classifier guidance, CFG does not use a classifier model separate from the diffusion model; the two are trained together. CFG achieves an effect similar to modifying the noise estimate $\epsilon$ in classifier-guidance sampling, but without the separate classifier. The diffusion model is trained by feeding either a condition $c$ or a null token $\varnothing$ into the model with some probability. We then define the estimated score of model $\theta$ for the input condition $c$ as $\epsilon_{\theta}(z_t, t, c)$, and the estimated score for the null token as $\epsilon_{\theta}(z_t, t, \varnothing) = \epsilon_{\theta}(z_t, t)$. After training, we modify the score using a linear combination of the unconditional and conditional scores, weighted by the IASM. CFG sampling is known to be robust against gradient-based adversarial attacks, whereas classifier-guidance sampling with a poorly trained classifier may lead to problems in consistency and fidelity. The score estimated by CFG sampling is as follows:

(13) $\tilde{\epsilon}_{\theta}(z_t, t, c) = (1 + s) \cdot \epsilon_{\theta}(z_t, t, c) - s \cdot \epsilon_{\theta}(z_t, t)$
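Eq. (13) translates into the guided noise estimate sketched below; the eps_model signature and the use of None as the null token are illustrative assumptions.

```python
# Sketch of classifier-free guidance (Eq. (13)): one conditional and one unconditional
# score evaluation per step, combined with the guidance weight s.
import torch


@torch.no_grad()
def cfg_epsilon(eps_model, z_t, t, cond, s: float):
    eps_cond = eps_model(z_t, t, cond)       # eps_theta(z_t, t, c)
    eps_uncond = eps_model(z_t, t, None)     # eps_theta(z_t, t), i.e., the null-token score
    return (1.0 + s) * eps_cond - s * eps_uncond
```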
Herein, we propose generative prediction for orthognathic surgery using ceph network (GPOSC-Net), which comprises two models: a two-module combination of our CNN-based image embedding module (IEM) and a GCNN-based landmark topology embedding module (LTEM), which predicts the movement of landmarks that would occur as a result of OGS; and a latent diffusion model, which is used to generate spost-cephs (Fig. ). The IEM utilizes a high-resolution network to maintain detailed representations of lateral cephalometric images. Before proceeding to the next step, the output of the IEM is subjected to channel coupling by the channel relation score module (CRSM), which calculates the relation score between channels of a feature map. The LTEM, on the other hand, employs a GCNN to learn the topological structures and spatial relationships of 45 hard- and soft-tissue landmarks. Finally, the movement of these landmarks is predicted by a multi-layer perceptron (MLP) module, which uses the combined outputs of the IEM and LTEM. To generate spost-cephs, the model uses a set of conditions that includes the movement of landmarks obtained through the IEM and LTEM, along with segmented profile lines of the pre-ceph. This approach aims to ensure a minimal generation ability for our system. To reinforce this capability, we trained an autoencoder on a dual dataset: one part with labeled pre-ceph and post-ceph images, and the other an extensive unlabeled set of 30,000 lateral cephalograms, randomly collected between 2007 and 2020, unrelated to any pre- or post-surgical condition or orthodontic treatment and sourced from an internal institution (Hospital J). The learning methods and the model structure are described in detail later in this paper. Finally, we employed the IASM during the testing phase to generate serial spost-ceph images corresponding to various amounts of virtual surgical movement. The IASM made it possible to calibrate the expected surgical movement ratio precisely across a continuous spectrum from 0 to 1.6, where a value of 0 represents no surgical movement (similar to the pre-ceph, 0%), a value of 1 corresponds to the full predicted movement (similar to the post-ceph, 100%), and a value of 1.6 equates to an enhanced projection with a 160% setback. This enabled the serial generation of spost-ceph images with nuanced variations in surgical movement. For IASM values ranging from 0.1 to 1.6, five spost-ceph images (including IASM 1) were randomly generated, and an appropriate treatment goal based on the pre-ceph was selected by two ODs and two OMFSs under blinded conditions.
Before training, all lateral cephalograms were standardized with a pixel spacing of 0.1 mm. Subsequently, the post-ceph was conventionally aligned with the pre-ceph based on the Sella–Nasion (SN) line. To include all landmarks in both the pre-ceph and post-ceph, a rectangle encompassing the regions defined by the Basion, Soft-tissue menton, Pronasale, and Glabella points in both images was cropped. Additionally, zero padding was applied horizontally and vertically to create a square image with a resolution of 1024 × 1024. The cropped image was divided by its maximum pixel value, normalizing the pixel values to the range 0–1. In addition, the coordinates of each landmark and the distances among landmarks were expressed as vectors to train the model. Before input to the model, the x- and y-axis distances were divided by the width and height of the cropped picture, respectively, normalizing the feature values to the range 0–1.
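A small sketch of the normalization described above is given below; the function and variable names are illustrative and not taken from the released code.

```python
# Sketch of the preprocessing described above: pixel normalization of the cropped
# 1024 x 1024 cephalogram and scaling of landmark coordinates by the crop size.
import numpy as np


def normalize_inputs(image, landmarks):
    # image: cropped, zero-padded cephalogram; landmarks: (45, 2) pixel coordinates (x, y)
    image = image.astype(np.float32) / image.max()             # pixel values into 0-1
    h, w = image.shape[:2]
    scaled = landmarks.astype(np.float32) / np.array([w, h])   # x by width, y by height, into 0-1
    return image, scaled
```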
All statistical analyses were performed using IBM SPSS Statistics (IBM Corporation, Armonk, NY, USA) version 25.

Landmark distance comparison for post-ceph and spost-ceph

Two ODs traced post-cephs and spost-cephs in the internal (n = 50) and external (n = 57) test sets. The SN − 7° line was set as the horizontal reference line, and the line passing through the S point and perpendicular to the SN − 7° line was set as the vertical reference line. The horizontal and vertical distances from each landmark were used as coordinate values. The coordinate values of the same landmark in the post-ceph and spost-ceph were compared, and the distance between landmarks was calculated. A paired equivalence test was performed for each landmark, applying a margin of error of 1.5 mm [41, 42]. The SPRs for each point were assessed according to errors <2.0 mm. Furthermore, we measured the distance between the profile lines of the post-ceph and spost-ceph. Taking anatomical structures into account, we divided them into four lines, and the distances between the lines were measured using the Hausdorff distance. Details on the errors in the profile lines and the definition of the four profile lines can be found in the Supplementary Table and Supplementary Fig. of the supplementary materials.

Visual Turing test

For the VTT, 57 external test images (29 post-cephs and 28 spost-cephs) were used, as the OMFSs and ODs had already observed the generated internal dataset during the digital twin experiment. The VTT was conducted with two ODs and two OMFSs by displaying images one by one through a dedicated web-based interface. Each examiner had more than 15 years of clinical experience. To reduce environmental variability, the images were displayed in the same order, and revisiting previous answers was prohibited. The examiners were informed that there were 29 real and 28 synthesized images. In addition, none had prior experience with synthesized images before the test. All examiners successfully completed the test. Sensitivity, specificity, and accuracy were derived, with real images defined as positive and synthetic images as negative.

Digital twin

We investigated the clinical applicability of the spost-cephs as digital twins for simulated surgical planning. Two ODs and two OMFSs were simultaneously shown the pre-ceph and five spost-cephs randomly generated at different degrees of surgical movement. To focus on cases with significant surgical changes, patients with surgical movement of ≤5 mm were excluded, resulting in the selection of 35 cases from the initial internal test set of 50. Subsequently, the examiners were asked to select an appropriate surgical movement amount considering the pre-ceph. The percentage of spost-cephs reflecting real surgical movements was then calculated.

Ablation study

The ablation study was conducted using an internal dataset of 50 samples. A single OD manually measured landmarks for each experimental condition. Given the intensive nature of manual landmark annotation, only the internal dataset was used to ensure feasibility while maintaining evaluation consistency. Paired t-tests were performed at each of the five experimental stages to compare results with those from the preceding stage, assessing the impact on landmark distance error. Statistical significance was set at p < 0.05, with p < 0.005 considered highly significant.
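The paired equivalence test with a ±1.5 mm margin can be illustrated as two one-sided tests (TOST) on the paired landmark differences. The original analysis was run in SPSS; the following scipy-based sketch is only an equivalent illustration with hypothetical variable names.

```python
# Sketch of a paired equivalence test (TOST) with a +/- 1.5 mm margin on the
# post-ceph vs. spost-ceph coordinate differences for one landmark.
import numpy as np
from scipy import stats


def paired_tost(post, spost, margin: float = 1.5) -> float:
    d = np.asarray(post) - np.asarray(spost)                            # paired differences (mm)
    p_lower = stats.ttest_1samp(d, -margin, alternative="greater").pvalue
    p_upper = stats.ttest_1samp(d, margin, alternative="less").pvalue
    return max(p_lower, p_upper)                                        # equivalence if below alpha


# Example: equivalence within 1.5 mm is concluded when paired_tost(post_x, spost_x) < 0.05.
```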
Further information on research design is available in the Reporting Summary linked to this article.
Supplementary Information
Reporting Summary
Transparent Peer Review file
Source Data
Systematic Review of the Occurrence and Antimicrobial Resistance Profile of Foodborne Pathogens from | 9266503a-8e8f-4e2d-b199-18ae13059a50 | 11728525 | Microbiology[mh] | Game meat is derived from non-domesticated, free-ranging wild animals and birds that are either legally hunted for personal consumption or raised, slaughtered, and commercially processed for food . Game meat has played a significant role in human nutrition, and game hunting has remained an essential activity in many parts of the world, including European countries . Approximately seven million hunters are registered in Europe and are pivotal in the primary production sector of game meat. European countries with the highest absolute numbers of registered hunters include France (approximately 1.3 million), Spain, the United Kingdom (UK), and Italy . In rural areas within mainland Europe, hunters are regarded as primary producers of game meat, with an important contribution to the development of local economies, supporting thus sustainable meat production . Game meat production in Europe decreased almost nine times in 2020, reaching a total production of about 13.5 thousand tons, compared to the first year analyzed, 2005, when there was a production of 129 thousand tons. According to the FAO (Food and Agriculture Organization of the United Nations) data from 2017, Germany led the rankings in terms of hunting meat production, with a total quantity of 58,400 tons, followed by Sweden with 16,062 tons, and Poland with 8103 tons . While the foodborne pathogens affecting livestock and game animals in Europe are largely similar (e.g., Salmonella spp. and Trichinella spp.), the absence of a standardized surveillance program for game meat is a significant issue. This lack of uniform safety protocols and policies across European countries further complicates the management of these issues . Numerous pathogens associated with game animals pose significant risks to the public and animal health . These pathogens can be transmitted through various routes, and only a subset are food-borne or can be transmitted via droplets produced during the processing of infected animal carcasses . Wildlife plays a critical role as a reservoir for zoonosis, especially pathogenic enteric bacteria . Game meat is likely to be more contaminated with enteric micro-organisms than meat from domestic animals due to several highly variable factors during the harvesting (e.g., hunting practices or the conditions under which game carcasses are dressed). Enterobacteriaceae are the indicator bacteria for the microbiological quality of food and the hygiene status of a production process. Additionally, the food contaminated by Enterobacteriaceae poses a microbiological risk for consumers . The Enterobacteriaceae family, including species such as Salmonella , E. coli , Proteus , and Klebsiella , presents a significant challenge in raw and processed meat products worldwide . These bacteria are predominant in food poisoning cases linked to various meat products. Given the unique challenges posed by game meat, it is crucial to include it in safety measures to effectively address its specific contamination risks . Salmonellosis is an enteric infectious disease that significantly threatens public health. In 2022, it was the second most frequently reported foodborne zoonosis in the European Union, with 65,208 cases of human illness. There were also 1014 foodborne outbreaks resulting in 6632 cases of illness, 1406 hospitalizations, and 8 deaths . 
Additionally, yersiniosis accounted for 7919 reported cases and 636 hospitalizations during the same period. It is important to note that game meat was not identified as a source in these reports. Wildlife is usually not exposed to clinically used antimicrobial agents but can acquire antimicrobial resistance (AMR) through contact with humans, domesticated animals, and environments . The spread of AMR bacteria in wildlife must be viewed as a major concern with serious implications for human and animal health. AMR is a growing global concern in the field of food safety and public health . Game meat, sourced from wild animals such as deer, wild boar, and game birds, has gained popularity in recent years. However, limited research has been conducted to assess the presence and prevalence of AMR bacteria in game meat. Understanding the prevalence of AMR bacteria in game meat is crucial, as it can help identify potential risks to human and animal health. Antimicrobials are necessary agents to fight diseases in humans, animals, plants, and crops. Despite this, their use is complicated by the development of AMR. One of the major causes of this natural process is the overuse of antimicrobials and their inappropriate administration (e.g., wrong category of antimicrobials, inadequate dose, and reduced duration of therapy), and this results in a high quantity of failed treatments against pathogens and in increasing mortality. AMR has been detected particularly among commensal gut bacteria with patterns that vary across species, locations, and times . The increasing number of antimicrobial-resistant Enterobacteriaceae both in veterinary and human medicine, the dissemination of these bacteria in several environments, and their possible repercussions on human health are causing concern. Taking these considerations into account, this review aims to provide a comprehensive summary of the frequency of isolation and AMR profiles of major foodborne pathogens from Enterobacteriaceae , specifically focusing on the genera Salmonella , Escherichia , and Yersinia in game animals across European countries in the 21st century. This study followed the Preferred Reporting Items for Systematic Reviews (PRISMA) guidelines. The process for the systematic review is detailed in . Initially, a thorough exploration was conducted to identify peer-reviewed scientific publications concerning the prevalence and AMR of Enterobacteriaceae members in wild ungulates across a major database: Google Scholar (last searched on July 2024). Additionally, the European Food Safety Authority was consulted (last searched on July 2024). The search was limited to studies conducted in Europe between 2001 and 2024. This geographical restriction was applied to ensure relevance and contextual specificity, considering potential variations in game meat consumption practices, environmental factors, and regulatory frameworks across different regions. The outcomes targeted in this review included the prevalence of foodborne pathogens in wild ungulates and their AMR profiles. The occurrence of specific pathogens, such as E. coli , Salmonella spp., and Yersinia spp., in tested samples from various wild ungulates and the patterns of AMR exhibited by these pathogens. 
The searching methodology employed specific keywords, including “game meat”, “ Enterobacteriaceae ” and “antimicrobial resistance”, supplemented by additional terms like “roe deer ( Capreolus capreolus )”, “red deer ( Cervus elaphus )”, “wild boar ( Sus scrofa )”, “chamois ( Rupicapra rupicapra )”, “moose ( Alces alces )”, “fallow deer ( Dama dama )”, “mouflon ( Ovis gmelini )”, “ E. coli ”, “ Salmonella spp.”, and “ Yersinia spp.”, strategically combined to ensure inclusivity. Other food-borne pathogens of Enterobacteriaceae , such as Cronobacter and Shigella, were excluded due to the limited availability of comprehensive data on these pathogens within mainland Europe and game meat. Cronobacter spp., for instance, while recognized for its severe implications in neonatal infections, particularly through contaminated powdered infant formula, remains less frequently reported in foodborne disease surveillance compared to more prevalent pathogens like Salmonella or pathogenic strains of E. coli . Similarly, Shigella infections, though significant, are often overshadowed by other more commonly reported enteric pathogens, and the variability in reporting practices can lead to an incomplete epidemiological picture . In an initial search, a total of 437 articles were identified. The eligibility of the articles was based on the availability of information regarding the prevalence of the targeted genera Salmonella , Escherichia, and Yersinia , as well as data on AMR. All selected studies were published in peer-reviewed journals, organizational websites, books, and dissertations, and were exclusively in the English language. The initial screening phase involved assessing the titles of the articles, with exclusions made for irrelevant studies. This included duplicates, studies focused on other samples’ origin, those concerning other animal species such as domestic animals (n = 33), other wild species (n = 41), or wild birds (n = 5), and studies related to other bacterial species (n = 23). In the subsequent selection phase, the abstracts of the remaining studies were independently and thoroughly reviewed to determine their relevance to the study’s objectives. Three independent reviewers extracted data from each included report. The reviewers worked independently, and any discrepancies were resolved through discussion or consultation with a 4th reviewer if necessary. No attempts were made to contact study authors for data confirmation, as all required information was available from the reports. Data extraction was conducted manually without the use of automation tools. Information was systematically extracted from each article, including the author, year of publication, country of the study, wild species investigated, number of samples tested, number of positive samples, number of isolates obtained, and data on AMR. Initially, a total of 437 manuscripts were identified through the Google Scholar database. Of these, 21 publications were excluded as their titles were either completely irrelevant or they were duplicates. The abstracts of the remaining 404 articles were then reviewed, resulting in the exclusion of 209 articles that did not align with the predefined criteria for this review. This exclusion was based on the following reasons: irrelevance to the scope of the review (n = 113), non-English language publications (n = 21), and lack of clear identification of Enterobacteriaceae (n = 75). Consequently, 185 studies remained for full-text examination, of which 13 could not be retrieved. 
Of the 172 articles fully reviewed, 33 were excluded as they only concerned domestic animals, 46 for concerning only other wild species, 14 for not reporting the number of positive samples, 2 for not reporting the total number of samples, 23 for discussing other bacterial species (e.g., Campylobacter , Listeria ), and 2 for a non-European country (e.g., Egypt and Namibia). Ultimately, 52 manuscripts met the inclusion criteria and were incorporated into this review. We prioritized results that reported the most recent and relevant data, focusing on studies that provided the highest methodological quality and comprehensiveness in their findings. 3.1. Isolation Frequency of Salmonella in Hunted Game Animals Salmonella spp. is a Gram-negative, flagellated, facultative anaerobic bacteria that belongs to the Enterobacteriaceae , and more than 2500 serotypes are known . Salmonellosis is an enteric infectious disease that poses a hazard for meat safety. With 65,208 cases of human illness, 1014 foodborne outbreaks causing 6632 cases of illness, 1406 hospitalizations, and 8 deaths, it was the second-most-often reported foodborne zoonosis in the European Union in 2022 . Salmonella spp. has been identified as a high-priority concern in ensuring the safety of wild boar meat and is recognized as a significant biological hazard in wild animals . Despite this, the contribution of game meat to the epidemiology of human salmonellosis remains unexplored. Regarding the prevalence of Salmonella spp. in game animals, scientific publications conducted in Europe and covering 12 countries, including Norway, Sweden, Germany, Czech Republic, Switzerland, Slovenia, Italy, Portugal, Spain, Romania, Serbia, and Greece, were reviewed. The published results about the prevalence of Salmonella in wild ungulates are summarized in . It is noteworthy that the recorded prevalence values of Salmonella spp. in game animals are highly variable, ranging from 0% in Norway, Germany, and Switzerland to 47.7% in Slovenia, as can be observed in . The most frequently isolated Salmonella serotypes obtained from game meat samples were Salmonella Salamae, accounting for a total of 83 isolates, followed by Salmonella Diarizonae with 73 isolates and Salmonella Enterica with 40 isolates. These findings highlight a notable prevalence of S. salamae and S. diarizonae in the sampled populations, suggesting that these serotypes may be more commonly associated with wild game meat. This distribution of serotypes provides valuable insights into the patterns of Salmonella contamination in wild game and raises important considerations for food safety monitoring and control practices . The data compiled from various studies reveals significant variability in the prevalence of Salmonella spp. among wild animal populations, particularly in wild boars, across different countries. For instance, Vieira-Pinto et al. reported a prevalence of 22.1% in wild boars, underscoring their potential role as carriers and spreaders of Salmonella spp. This finding was supported by the results published by Razzuoli et al. , who found a 12.45% prevalence, yet lower than the 19.3% in Murcia, Spain , and 35% in Latium, Italy . Similarly, Rîmbu et al. identified a 10.7% prevalence, whereas Hulánková et al. reported a very low prevalence of 0.4% in the Czech Republic, illustrating significant geographical variability. Further studies, such as that by Bonardi et al. , found a 2% and 10.2% prevalence in both carcasses and mesenteric lymph nodes, respectively, while Bassi et al. 
, who reported a 17% seroprevalence in Switzerland, revealed substantial differences even within relatively close regions. These results are mirrored by the findings of Cilia et al. , who documented a 4.18% prevalence, comparable to studies in Spain and Sweden . Additionally, Razzuoli et al. found 540 out of 4335 samples were positive for Salmonella spp., indicating a notable presence of the pathogen. Petrović et al. reported an overall prevalence of 1.6% in Vojvodina hunting grounds, with some areas reaching up to 33.3%, suggesting localized spikes in prevalence. Meanwhile, Siddi et al. observed an overall prevalence of 4.5%, and Floris et al. found no Salmonella spp. in muscle and liver samples tested by PCR. Altissimi et al. reported an overall prevalence of 1.36% in a comprehensive study from 2018 to 2023. In contrast to wild boars, other wild ungulates such as deer, chamois, moose, and ibex exhibit lower or no prevalence of Salmonella spp. . Obwegeser et al. and Lillehaug et al. reported no detection of Salmonella spp. in their samples, while Díaz-Sánchez et al. found an overall sample-level prevalence of 0.8%, with 1.2% in wild boars and 0.3% in red deer. These findings suggest that these species present a lower risk of Salmonella transmission compared to wild boars. Comparing prevalence rates across different regions and species, Ortega et al. reported a 19.3% overall Salmonella seroprevalence in Spain, higher than 1.5% in Portugal , 7.2% in Italy , and 11.3% in Northeast Spain . However, this rate is lower than the 47.7% in Slovenia and 30.7% in Campania, Italy , but higher than the 4.3% in Greece and 12.4% in Switzerland . The most frequently identified serotypes of Salmonella were S. enterica subsp. Salamae, with 83 isolates , followed by S. diarizonae with 73 isolates , S. enteritidis with 37 isolates , and S. typhimurium with 30 isolates . In addition, other serotypes with significant public health implications were identified: S. paratyphi (1) and S. newport (10) . S. paratyphi is responsible for paratyphoid fever, which poses substantial health risks due to its transmission through contaminated food and water , while S. newport is particularly concerning due to its frequent association with multidrug resistance (MDR), complicating treatment efforts and leading to severe, widespread outbreaks . Both serotypes underscore the importance of vigilant monitoring and stringent control measures to mitigate their impact on public health. The reviewed studies confirm that wild boar populations serve as significant reservoirs for Salmonella spp., posing potential risks to human and animal health . The variability in prevalence rates underscores the importance of localized studies to assess specific risk factors and implement targeted public health interventions. While other wild animals such as deer, chamois, and ibex present a lower risk, the increasing wild boar populations and their interaction with human activities elevate the potential for zoonotic transmission. Further research should focus on identifying the specific Salmonella serotypes circulating within wild ungulate populations to better understand the epidemiology and develop effective control measures. Given the public health implications, it is essential to monitor the impact of the growing wild boar populations and the consumption of their meat, implement rigorous food safety practices, and educate the public on the risks associated with handling and consuming wild game meat. 
This comprehensive approach will aid in mitigating the risks posed by Salmonella spp. and safeguarding public health. 3.2. Distribution of the Pathogenic Escherichia coli Strains in Hunted Game Animals Generic E. coli is often a harmless component of the normal microflora in humans and other animals. Nevertheless, acquiring virulence genes through various mechanisms has endowed certain E. coli strains with different types of pathogenicity. Numerous enteropathogenic groups of E. coli have been identified as causes of various gastrointestinal infections. Six principal pathotypes of E. coli have been distinguished: enteropathogenic E. coli (EPEC), enterotoxigenic E. coli (ETEC), enteroinvasive E. coli (EIEC), diffusely adhering E. coli (DAEC), enteroaggregative E. coli (EAEC), and enterohemorrhagic E. coli (EHEC) . According to the European Food Safety Authority (EFSA), the European Centre for Disease Prevention and Control (ECDC), and the One Health 2020 Zoonoses Report, STEC (Shiga toxin-producing Escherichia coli ) infections rank as the fourth most common zoonotic disease, following campylobacteriosis, salmonellosis, and yersiniosis . Currently, the pathogenicity of STEC is categorized by serotype, with the top five being O26, O103, O111, O145, and O157, which were previously the most frequently detected serotypes among patients with hemolytic uraemic syndrome (HUS) . Public health authorities mainly focus on O157 STEC infections due to their high pathogenicity. However, non-O157 STEC serogroups, including O26, O103, O111, O121, and O145, cause twice as many human infections . This serotype is a component of the gut microbiota in various animal species, with ruminants, particularly cattle, identified as a major reservoir . Wild boars have been previously identified as carriers of E. coli O15 and other STEC strains that pose potential pathogenic risks to humans . It is important to note that the reported prevalence rates of E. coli in game animals exhibit significant variability, as shown in . The published results regarding the prevalence of E. coli in game ungulates are summarized in . In a study by Lillehaug et al. , 104 isolates of potentially pathogenic serovars of E. coli were identified among the 207 pooled samples examined. The serovar E. coli O103 was detected in 41% of the pooled samples, whereas serovars O26 and O145 were found less frequently. Notably, serovars O111 and O157 were not observed in any of the samples. In contrast, in a study conducted in Switzerland , an analysis of 239 fecal samples from wild ungulates revealed that 53.1% were positive for E. coli . Similarly, in a study by Sánchez et al. , STEC strains were detected in 58 (23.9%) of the animals sampled. The prevalence of STEC was found to be 24.7% (51 out of 206) in red deer, 5% (1/20) in roe deer, 33.3% (2/6) in fallow deer, and 36.4% (4/11) in mouflon. Additionally, two different STEC strains were identified in seven of the animals. Díaz-Sánchez et al. isolated STEC from deer and wild boar in the carcass and fecal samples. The overall prevalence of STEC in fecal samples was 21.6% (124/574), while in carcass samples it was 21.3% (125/585). Non-STEC O157 strains were isolated in 34% (89/264) of deer fecal samples, 4% (11/301) of wild boar fecal samples, 7% (19/271) of deer carcass samples, and 4% (12/310) of wild boar carcass samples. Similarly, in a study by Mora et al. , STEC strains were recovered from 52.5% of the roe deer (94/179) and 8.4% of the wild boars (22/262). Lauzi et al. 
found similar data in the analyzed deer feces, with a prevalence of STEC strains of 19.9%. Out of 536 fecal samples tested from wild boars tested by Plaza-Rodríguez et al. , 37 yielded STEC (6.9%). Considering the species, STEC was recovered from 37% of the red deer samples (37/101) and 14% from wild boar (8/56) samples. These results followed the European trend, with a higher STEC rate identified in cervids . The findings obtained by Szczerba-Turek et al. demonstrated that red deer and roe deer serve as potential carriers of non-O-157 STEC isolates, which may pose pathogenic risks to humans. STEC strains were identified in 21.65% and 24.63% of rectal swabs from red deer and roe deer, respectively. This finding is particularly significant given Europe’s steadily increasing red deer population and the direct and indirect interactions between wildlife and humans, domestic animals, water bodies, and the broader environment. The results further corroborate that wildlife, including wild ungulates, can act as reservoirs and disseminators of STEC within the environment. In a study by Díaz-Sánchez et al. , E. coli O157 was specifically detected and isolated from deer fecal samples at four of the thirty-three hunting estates sampled, corresponding to a prevalence of 12% at the estate level and 1.5% (4/264) at the sample level. Likewise, Navarro-Gonzalez et al. reported that E. coli O157:H7 was found in 4/117 wild boars, yielding a prevalence of 3.41%, and in 2/160 Iberian ibexes, with a prevalence of 1.25%. Similar findings were obtained in the Czech Republic , where E. coli O157 was detected in 2/242 fecal samples (0.8%). In Serbia , during the 2013–2014 hunting season, fecal samples collected from roe deer, deer, and wild boars were sent to the bacteriology laboratory for isolation. Out of 105 fecal samples, E. coli was isolated from 100 samples, indicating a high prevalence rate of 95.23%. Similarly, in Portugal, during the same investigation period, 67 fecal samples were collected from wild ungulates, including wild boar, red deer, and roe deer. E. coli was detected in 96% of the samples (64) . A high prevalence of E. coli (83.78%) was detected in Poland by Wasyl et al. , where, out of 660 fecal samples from wild boar, roe deer, red deer, and fallow deer, 553 were positive for E. coli . In Tuscany, Bertelloni et al., reported that 175 E. coli pure cultures were isolated from 200 tested animals (87.5%). These findings highlight a comparably high prevalence of E. coli in wild, ungulate populations across different regions of Europe. Based on the data, it is noteworthy that various species of wild ungulates across different regions of Europe serve as significant reservoirs for STEC and other pathogenic E. coli strains. The prevalence rates of STEC and non-O157 STEC isolates vary considerably among species and regions, with red deer, roe deer, and wild boars frequently identified as carriers. Studies consistently report notable prevalence rates, such as 21.6% in deer fecal samples and 6.9% in wild boar fecal samples , indicating a widespread distribution of these pathogens in wildlife. Additionally, specific pathogenic serovars like E. coli O103, O26, and O145 are detected more frequently, while O157 remains relatively rare in some studies but present in others at varying levels . 
These findings underscore the role of wild ungulates as important vectors in the environmental dissemination of STEC, posing potential risks to human and animal health due to their increasing interactions with humans and domestic animals. 3.3. Yersinia spp. in Game Animals The genus Yersinia , belonging to the bacterial family Enterobacteriaceae , comprises 28 distinct species. Among these, three species, Y. pestis , Y. pseudotuberculosis , and Y. enterocolitica , have been identified as pathogenic to humans. Notably, Y. enterocolitica is prevalent across a variety of food sources, animal reservoirs, and environmental niches and comprises both pathogenic and non-pathogenic strains . In Europe, Y. enterocolitica strains (serotype O:3 and serotype O:9) are often associated with clinical cases in humans . Swine is an important reservoir of these bioserotypes, and they usually carry the agent asymptomatically in the tonsils . In the EU, yersiniosis is classified as a notifiable zoonotic disease, mandating reporting to authorities and inclusion in the European Food Safety Authority’s (EFSA) annual report . Most cases of human yersiniosis reported in Europe are caused by Y. enterocolitica, and only a few have been attributed to Y. pseudotuberculosis . According to the European Centre for Disease Prevention and Control, a total of 7663 cases of yersiniosis were reported across the European Union (including the UK) in 2019. Of these cases, 100 were attributed to Y. pseudotuberculosis , while 7563 were caused by Y. enterocolitica . The highest notification rates were observed in member states located in northeastern Europe . It is important to note that none of these infections were caused by consumption of game meat. In wildlife, European authors reported a prevalence between 58.2% and 2.5% in Spain and Italy , respectively. The overall prevalence for each country can be observed in . The published data on the prevalence of Yersinia spp. in game meat are succinctly summarized in . From the data analyzed across 6 European countries, Y. enterocolitica was the most frequently isolated species. This prevalence underscores the widespread distribution of Y. enterocolitica within the region, emphasizing its potential significance as a public health concern across diverse geographic areas in Europe . The contamination of game meat with Y. enterocolitica also remains insufficiently investigated . Pathogenic Y. enterocolitica was detected on the surface of 38.3% of raw game meat samples in Bavaria, Germany. In Switzerland, Fredriksson-Ahomaa et al. reported a 44% detection rate of enteropathogenic Yersinia in the tonsils of 153 wild boars using real-time PCR. Specifically, Y. enterocolitica was detected in 35% of the animals, while Y. pseudotuberculosis was found in 20%. Notably, both species were simultaneously detected in 10% of the sampled wild boars. In Italy , a total of 28 Yersinia spp. were isolated from 18 out of 251 animals (7.2%): ten wild boar (15.4%), four red deer (7.1%), three roe deer (4.9%) and one chamois (1.5%). Six of these were identified as Y. enterocolitica species; these six isolates were retrieved from one chamois, one roe deer, one red deer, and three wild boars. Similarly, in Spain, Arrausi-Subiza et al. reported that antibodies against Y. enterocolitica and Y. pseudotuberculosis were detected in 52.5% of the tested animals. Using PCR, Y. enterocolitica was identified in 33.3% of the wild boars, while Y. pseudotuberculosis was detected in 25% of the tonsil samples. 
Correspondingly, the study conducted by Sannö et al. provides valuable insights into the prevalence of Yersinia species in wild boars. A comprehensive analysis of 319 samples from 88 wild boars was performed using PCR, including 175 tonsil samples, 88 fecal samples, and 56 ILN samples. The results demonstrated a significant presence of pathogenic Yersinia species among the sampled population. Specifically, 20.5% (18/88) of the wild boars tested positive for Y. enterocolitica, while 19.3% (17/88) were positive for Y. pseudotuberculosis. Four individuals tested positive for both Y. enterocolitica and Y. pseudotuberculosis. In a study conducted by Von Altrock et al., the tonsils of 111 wild boars hunted in Lower Saxony, Germany, were investigated. A total of 17.1% of the wild boar tonsils were positive for Y. enterocolitica, while two boars (1.8%) carried isolates identified as Y. frederiksenii. Likewise, bacteriological examination of 302 rectal swabs from 151 wild boars in Poland resulted in the isolation of 40 Y. enterocolitica strains. Laboratory examination of 336 swabs collected from 56 wild ungulate carcasses in Poland revealed 52 Y. enterocolitica strains. These were identified in 12/20 (60%) of roe deer carcasses, 7/16 (43.8%) of red deer carcasses, and 11/20 (55%) of wild boar carcasses. The relatively high degree of carcass contamination with Y. enterocolitica is of concern due to the growing popularity of game meat among consumers. The findings of Syczyło et al. provide compelling evidence of the widespread presence of Y. enterocolitica among game animals in Poland. The study revealed that Y. enterocolitica isolates were detected in the rectal swabs of 21.7% (186/857) of the tested animals; the prevalence of Y. enterocolitica infection was highest in wild boars, where 25.3% (110/434) of the examined animals were infected. In comparison, only 21.6% of red deer (63/291), 9.4% of roe deer (11/117), and 13.3% of fallow deer (2/15) were infected. In a study conducted by Sannö et al. in Sweden, 31.0% (28/90) of wild boars tested positive for Y. enterocolitica, while 22.0% (20/90) were positive for Y. pseudotuberculosis. In the study conducted by Bonardi et al., Y. enterocolitica was not isolated from carcasses or MLNs. However, 3/49 carcasses were contaminated with bacteria of the genus Yersinia. Specifically, one strain each of Y. frederiksenii, Y. bercovieri, and Y. aldovae was detected in these carcasses. Notably, these species are not recognized as causative agents of human yersiniosis. In contrast, Cilia et al. reported that 71 Yersinia isolates were obtained from rectal swabs of wild boars, accounting for 24.7% of the samples analyzed. Of these, 54 isolates (18.8%) were biochemically identified as Y. enterocolitica. The remaining 17 isolates were classified as Y. frederiksenii or Y. intermedia. Similarly, Siddi et al. reported an overall prevalence of Y. enterocolitica of 30.3% (20/66) in wild boars. Among the Y. enterocolitica-positive animals, 10% (2/20) tested positive in both colon content and carcass surface samples, and an additional 10% (2/20) were positive in both colon content and MLNs. Furthermore, 5% (1/20) of the animals were positive for Y. enterocolitica exclusively in carcass surface samples. Specifically, Y. enterocolitica was identified in 27.3% (18/66) of colon content samples, 4.5% (3/66) of MLN samples, and 6.1% (3/49) of carcass surface samples.
In a study conducted by Floris et al., Y. enterocolitica was analyzed in 101 liver samples, and a total of eight strains (7.9%) were detected, distributed as follows: five from wild boars, two from chamois, and one from deer. The prevalence of Y. enterocolitica in this study was approximately 8%, which is higher than the 2.5% prevalence reported for Liguria but lower than the prevalence documented in Tuscany. This comprehensive review of studies across various European countries highlights the significant presence and varying prevalence rates of Y. enterocolitica and other Yersinia species in wild ungulates, particularly wild boars. Despite the observed variations in prevalence rates, the overall findings indicate a widespread distribution of these pathogens, with substantial evidence identifying wild boars as significant reservoirs. Under these circumstances, it is imperative to characterize the strains circulating within wild boar populations to ascertain their serotype, biotype, and, most critically, their pathogenic potential. The reviewed studies highlight the importance of ongoing monitoring and detailed characterization of Yersinia strains in wild game populations to enhance our understanding of their pathogenic potential and to inform and develop effective public health strategies.

3.4. Antimicrobial Resistance Profile of Game-Origin Pathogenic Strains

AMR has been identified by the World Health Organization (WHO) as one of the top ten global public health threats confronting humanity. Without intervention, it is estimated that global deaths attributable to AMR could reach 10 million annually by 2050. The administration of antibiotics is a cornerstone of contemporary medicine; however, the rise of AMR, particularly within the Enterobacteriaceae, is escalating into a global crisis. The emergence of strains resistant to most, if not all, available antimicrobials presents significant challenges to public health. Most AMR observed in Enterobacteriaceae arises from the acquisition of mobile genetic elements, such as plasmids, which facilitate horizontal gene transfer across species and even genus boundaries. Mutations in chromosomal genes also contribute significantly, enhancing resistance to various antimicrobial classes, including both newly introduced and older antimicrobials that are being reconsidered for treating MDR organisms. Wild animals, owing to their limited direct exposure to antimicrobials, were traditionally expected to exhibit low levels of AMR; however, increasing interactions with humans and livestock have had significant impacts on their bacterial flora. Enterobacteriaceae isolated from wild ungulate meat exhibited the highest resistance to tetracycline (TET) and ampicillin (AMP), followed by resistance to amoxicillin–clavulanic acid (AMC), as reflected in . Dias et al. assessed AMR in E. coli and Salmonella isolated from wild ungulates, testing multiple antimicrobials. E. coli isolates exhibited the highest resistance to ampicillin (9.87%), followed by tetracycline (8.55%), streptomycin (4.61%), and co-trimoxazole (3.95%). MDR was detected in 3.3% of isolates, predominantly from wild boar and red deer. Conversely, a Salmonella strain from a wild boar sample demonstrated susceptibility to all tested antimicrobials. In Norway, Lillehaug et al. reported AMR in only 3 out of 137 E. coli strains (2.2%) from moose, red deer, and roe deer. Among reindeer isolates, three exhibited MDR, with streptomycin resistance being the most frequent.
Costa et al. found varied resistance levels in E. coli from wild animals in Portugal, with resistance to tetracycline, streptomycin, ampicillin, and trimethoprim–sulfamethoxazole ranging from 19% to 35%. Bertelloni et al. observed high resistance levels in E. coli isolated from wild boars in Tuscany, particularly against β-lactam antimicrobials. Most isolates showed resistance to cephalothin (165/175), amoxicillin–clavulanic acid (152/175), and ampicillin (120/175), with lower resistance to enrofloxacin and gentamicin (24/175), while minimal resistance was noted for trimethoprim–sulfamethoxazole (3/175) and chloramphenicol (1/175). Although these percentages may seem unusual and worrying, different authors in Italy have detected similar levels of resistance among Enterobacteriaceae from wild animals other than wild boar. In Poland, Wasyl et al. detected resistance in E. coli from feces (n = 660) of wild ungulates (red, roe, and fallow deer, European bison, and wild boar), with the highest resistance to sulfamethoxazole, streptomycin, ampicillin, trimethoprim, and tetracycline (1.3–6.6%). No significant differences were observed between boar and ruminant isolates. Most deer and bison isolates showed no resistance. In Serbia, Velhner et al. detected MDR phenotypes in seven isolates, each exhibiting distinct genomic macrorestriction profiles. PCR analysis and sequencing revealed diverse resistance genes, gene cassettes, and cassette arrays in these MDR isolates. Fluoroquinolone resistance was observed in five E. coli isolates, specifically in two from roe deer, one from deer, and two from wild boar. Mercato et al. comprehensively evaluated antimicrobial susceptibility profiles for 16 ESBL-producing E. coli strains. The study revealed 100% non-susceptibility to penicillins, 3rd-generation cephalosporins (3GCs), 4th-generation cephalosporins (4GCs), tetracyclines, and monobactams. Resistance rates were 75% to trimethoprim–sulfamethoxazole, 37.5% to chloramphenicol, 62.5% to ciprofloxacin, and 31.25% to levofloxacin. Among aminoglycosides, resistance was observed at 31.25% to tobramycin and 37.5% to gentamicin, while resistance to amoxicillin/clavulanate was 18.75%. All isolates were fully susceptible to carbapenems, amikacin, tigecycline, fosfomycin, piperacillin/tazobactam, and colistin. Notably, all suspected ESBL-producing E. coli exhibited an MDR profile, with a significant proportion (greater than 60%) showing non-susceptibility to fluoroquinolones. This finding is of particular concern due to the frequent clinical use of fluoroquinolones for treating infections. Elsby et al. found significant resistance to tetracycline and cefpodoxime in AMR E. coli from deer fecal samples in Scotland, although no resistance to meropenem was detected. Razzuoli et al. investigated the prevalence of AMR Salmonella spp. strains within the wild boar population in Liguria, Italy. Of the 260 strains analyzed, 94.6% (246/260) exhibited resistance to at least one of the tested antimicrobials. Specifically, 40% (98/260) were resistant to two or more antimicrobials, 17.3% (45/260) to three or more, and 9.6% (25/260) to four or more antimicrobials. The highest resistance rates were observed against a combination of sulfadiazine, sulfamerazine, and sulfamethazine, with 96% of the strains demonstrating resistance to these compounds. Conversely, less than 1% of the strains were resistant to chloramphenicol, colistin, ceftazidime, enrofloxacin, and nalidixic acid, and no strains showed resistance to ciprofloxacin.
Intermediate susceptibility was most commonly observed for kanamycin (43%), streptomycin (30.2%), and tetracycline (23.4%). The observed AMR to these molecules is lower than that reported in other studies conducted in wild boars; however, those studies considered a lower number of Salmonella spp. strains. Siddi et al. reported that all Salmonella isolates (3/3, 100%) were susceptible to all tested antimicrobials. Regarding Y. enterocolitica isolates, three different AMR profiles were identified: 10/24 (41.7%) showed resistance to amoxicillin–clavulanic acid, ampicillin, and cefoxitin (AmcAmpFox); 11/24 (45.8%) showed resistance to amoxicillin–clavulanic acid and ampicillin (AmcAmp); 1/24 (4.2%) showed resistance to ampicillin (Amp); and 2/24 (8.3%) were susceptible to all antimicrobials tested. Overall, 22/24 (91.7%) of Y. enterocolitica isolates showed phenotypic resistance to at least one beta-lactam compound. Comparably, Modesto et al. reported that 61.9% (n = 78) of the Yersinia isolates tested exhibited resistance to at least one antimicrobial. Specifically, 85.71% of the isolates were resistant to ampicillin, 23.8% to Triple-Sulfa and sulfisoxazole, and 7.14% to ceftiofur. Resistance to chloramphenicol and enrofloxacin was not detected, and the strains demonstrated very low resistance to streptomycin and tetracycline (0.79%); an increasing trend in resistance was observed for ampicillin, Triple-Sulfa, sulfisoxazole, and ceftiofur. Additionally, regarding MDR, twelve strains were resistant to two antimicrobials, fourteen to three antimicrobials, five to four antimicrobials, and nine to five antimicrobials. The data indicate a consistent pattern of rising AMR in wild ungulates across Europe. Although these animals are less exposed to antibiotics, they are increasingly showing resistance due to their interactions with human activities. This trend underscores the critical need for ongoing surveillance and robust stewardship of antimicrobial use. Effective monitoring should encompass both domestic and wild animal populations to address the spread of resistance comprehensively. Developing novel therapeutic strategies and alternative approaches to antimicrobial use is essential to counteract the escalating threat of AMR. The rising prevalence of AMR in wild ungulates highlights the interconnectedness of ecosystems and the importance of a One Health approach to managing AMR. This approach recognizes that human, animal, and environmental health are intrinsically linked and must be addressed together to mitigate the global threat of AMR.
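The MDR tallies quoted above (isolates resistant to two, three, four, or five antimicrobials) are typically produced by counting, for each isolate, the agents or antimicrobial classes to which phenotypic resistance is observed. The sketch below illustrates one common convention, flagging an isolate as MDR when it is resistant to agents from three or more classes; the class assignments and isolate profiles are invented for illustration and do not reproduce the data of the studies cited above.

```python
# Hypothetical class assignments; real studies group agents by antimicrobial class.
CLASS_OF = {
    "ampicillin": "penicillins",
    "amoxicillin-clavulanate": "beta-lactam/inhibitor combinations",
    "cefoxitin": "cephamycins",
    "tetracycline": "tetracyclines",
    "streptomycin": "aminoglycosides",
    "sulfisoxazole": "sulfonamides",
}

def resistant_class_count(resistant_agents):
    """Number of distinct antimicrobial classes with phenotypic resistance."""
    return len({CLASS_OF[a] for a in resistant_agents if a in CLASS_OF})

def is_mdr(resistant_agents):
    """MDR: resistance to agents from three or more antimicrobial classes."""
    return resistant_class_count(resistant_agents) >= 3

# Invented isolate profiles, for illustration only.
isolates = {
    "isolate_A": ["ampicillin", "amoxicillin-clavulanate", "cefoxitin"],
    "isolate_B": ["ampicillin", "tetracycline", "sulfisoxazole", "streptomycin"],
    "isolate_C": ["ampicillin"],
}
for name, profile in isolates.items():
    verdict = "MDR" if is_mdr(profile) else "non-MDR"
    print(f"{name}: resistant in {resistant_class_count(profile)} class(es) -> {verdict}")
```

Note that some of the reviewed studies count individual antimicrobials rather than classes, which is one reason MDR percentages are not directly comparable across reports.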
The analysis of game meat production and the prevalence of foodborne pathogens in wild ungulates across Europe reveals significant public health implications. While game meat is a traditional and valued part of European diets, it presents unique challenges due to its association with various pathogens and AMR bacteria. However, the economic impact of the summarized results cannot be accurately estimated, because game meat accounts for a different share of the diet in each European country. Furthermore, the possible negative impact on the health of humans and domestic animals cannot be neglected, because wildlife has started to adapt to semi-urban and even urban areas, constituting an important reservoir for the persistence of drug-resistant pathogenic strains within the environment. The reviewed studies highlight that wild boars are major reservoirs for Salmonella spp., E. coli, and Yersinia spp., with considerable variability in pathogen prevalence across different species and regions. Specifically, wild boars and other ungulates such as deer are frequently implicated in the environmental dissemination of pathogenic E. coli and Yersinia. The rising levels of AMR in these wildlife populations, despite their limited direct exposure to antimicrobials, are a growing concern. This resistance is likely linked to human activities and environmental contamination. To mitigate these risks, it is crucial to implement stringent monitoring and surveillance programs for both game meat safety and AMR. Emphasizing a One Health approach, which integrates human, animal, and environmental health strategies, will be essential in addressing these issues comprehensively. Future research should focus on the detailed characterization of pathogens and resistance patterns in wildlife to develop effective public health interventions and ensure the safety of game meat.
Best Achievements in Clinical Thyroidology in 2020 | a9d3a523-f746-4dca-a19f-028e8dfbf626 | 7937845 | Physiology[mh] | In 2020, clinical studies in thyroidology reported outstanding results. Specifically, intriguing questions about thyroid dysfunction and thyroid cancer were answered through well-designed, randomized clinical trials. This review summarizes the important research published in 2020.
Original, peer-reviewed research articles published between January 2020 and December 2020 were extracted through an independent literature review. A brief summary of these articles is presented along with their clinical utility or implications. The publications of interest discussed below dealt with the following topics: thyroid dysfunction, risk of thyroid cancer, molecular diagnostics and new therapeutics for thyroid cancer, and thyroid disease in the coronavirus disease 2019 (COVID-19) pandemic era.
The impacts of subclinical hypothyroidism on cardiovascular morbidity and mortality and the benefits of levothyroxine (LT4) replacement remain inconclusive because only a limited number of prospective cohort studies or randomized controlled trials have investigated LT4 replacement. In February 2020, Inoue et al. sought to clarify to what extent subclinical hypothyroidism is associated with cardiovascular mortality in a representative sample of 9,020 adults in the United States enrolled in the National Health and Nutrition Examination Survey, and demonstrated that cardiovascular disease mediated 14.3% and 5.9% of the associations of subclinical hypothyroidism and high-normal thyroid-stimulating hormone (TSH) levels with all-cause mortality, respectively. This finding suggests that investigations are needed to examine the clinical benefits of medical interventions for people with elevated TSH levels. Following those results, two preliminary but important findings were reported from double-blind, randomized controlled trials of LT4 replacement in subjects with subclinical hypothyroidism. Jabbar et al. investigated the effects of LT4 in patients with subclinical hypothyroidism presenting with acute myocardial infarction, but failed to show any benefits for outcomes such as left ventricular function, adverse events, and quality of life after 52 weeks of LT4 treatment. Furthermore, de Montmollin et al. reported that LT4 replacement failed to improve hypothyroid symptoms or tiredness scores at 1 year in subjects aged ≥65 years with subclinical hypothyroidism (4.6 ≤ TSH ≤ 11.9 mU/L) in a secondary analysis of the randomized, placebo-controlled Thyroid Hormone Replacement for Untreated Older Adults with Subclinical Hypothyroidism Trial (TRUST). During pregnancy, maternal deficiency of thyroid hormone is associated with low birth weight; however, the impact of subclinical hypothyroidism has remained unclear. A recent systematic review and individual-participant data meta-analysis of 48,145 mother-child pairs from 36 cohorts provided evidence that maternal subclinical hypothyroidism during pregnancy (n = 1,275) was associated with a higher risk of small for gestational age (SGA; odds ratio [OR], 1.24; 95% CI, 1.04 to 1.48) and lower birthweight, while isolated hypothyroxinemia (n = 929) was associated with a lower risk of SGA (OR, 0.7; 95% CI, 0.55 to 0.91) and higher birthweight. There was an inverse, dose-response association of maternal TSH and free thyroxine (T4; even within the normal range) with birthweight, supporting the rationale for thyroid function screening during the prenatal period to improve postpartum outcomes.
Graves’ disease is characterized by the presence of autoantibodies that stimulate the TSH receptor, resulting in hyperthyroidism. The first-line treatment of thyrotoxicosis is a thionamide anti-thyroid drug (ATD). A randomized trial investigated the efficacy of a blocking dose of ATD with LT4 replacement compared to ATD monotherapy dose titration. The primary outcome was the percentage of patients with normal TSH levels between 6 months and 3 years after treatment, and the secondary outcomes included adverse event frequency and remission/relapse at 4 years. The study showed no evidence to suggest that the block-and-replace regimen is associated with improved outcomes compared to monotherapy. The conventional treatment modalities for Graves’ disease have remained unchanged for the past 70 years. Recently, novel therapeutic agents targeting the CD40-CD154 co-stimulatory pathway or the insulin-like growth factor I receptor (IGF-IR) have been developed. Kahaly et al. conducted an open-label phase II proof-of-concept study of iscalimab, an anti-CD40 blocking monoclonal antibody, in 15 patients with Graves’ disease. Iscalimab induced euthyroid status in seven patients (47%) over a 12- to 20-week treatment period, but four (57%) of those patients relapsed after discontinuation. Twelve (80%) patients showed at least one reversible adverse event of mild to moderate severity. The efficacy of the IGF-IR inhibitor teprotumumab on Graves’ orbitopathy (GO), which is a serious extrathyroidal manifestation of Graves’ disease associated with activation of the IGF-IR pathway, also showed promising results; a 24-week course of teprotumumab led to a reduction in proptosis (≥2 mm) in 78% of 41 patients versus 7% of 42 controls. It also resulted in better secondary outcomes than placebo with respect to the Clinical Activity Score (frequency of score 0–1: 59% vs. 21%), diplopia (response in 68% vs. 29%), and quality of life (GO-QOL overall score: 13.79 vs. 4.43), with few adverse events.
The results from a nested case-control study based on Nordic population-based national cancer registry data (Denmark, Finland, Norway, and Sweden) were published in December 2020, demonstrating the impact of early-life risk exposures on the risk of thyroid cancer. The thyroid cancer risk in offspring was analyzed in relation to maternal comorbidities and birth outcomes among 2,437 thyroid cancer cases (81.4% with papillary thyroid cancer [PTC], 77.1% women) matched with up to 10 non-cancer controls based on birth year, sex, country, and county of birth. Postpartum outcomes (higher birth weight, congenital hypothyroidism, postpartum hemorrhage) and maternal comorbidities (diabetes before pregnancy, thyroid dysfunction, goiter, and benign thyroid neoplasms) were each associated with an increased risk of thyroid cancer in offspring. Of note, maternal thyroid comorbidity markedly increased the risk, with ORs of 67.36 (95% CI, 39.89 to 113.76) for goiter, 22.50 (95% CI, 6.93 to 73.06) for benign neoplasms, 18.12 (95% CI, 10.52 to 31.20) for hypothyroidism, and 11.91 (95% CI, 6.77 to 20.94) for hyperthyroidism. Fetal congenital hypothyroidism also showed a high OR of 4.55 (95% CI, 1.58 to 13.08). This study provides evidence to support an association between intrauterine exposures, particularly those related to maternal thyroid status during pregnancy, and later risk of thyroid cancer, although some genetic predisposition for thyroid disease could not be excluded. Although thyroid hormonal status is known to be associated with the risk of thyroid cancer, whether thyroid dysfunction plays a causal role in the development of cancer remains inconclusive. Tran et al. provided additional evidence regarding that issue by demonstrating that both hyperthyroidism and hypothyroidism were associated with higher risks of thyroid cancer compared to euthyroidism (pooled risk ratio for hyperthyroidism, 4.49; 95% CI, 2.84 to 7.12; pooled risk ratio for hypothyroidism, 3.31; 95% CI, 1.20 to 9.13) in a meta-analysis of 13 million subjects from 15 studies. However, two recent studies on the association of thyroid cancer with genetic variants related to thyroid function demonstrated contrary results. Zhou et al. performed a genome-wide association study (GWAS) meta-analysis for 22.4 million genetic markers in up to 119,715 individuals from three large studies (the Nord-Trøndelag Health Study [HUNT study], the Michigan Genomics Initiative [MGI], and the ThyroidOmics consortium) and found 74 susceptibility loci for TSH levels that together explained 13.3% of the variance in TSH. Unexpectedly, phenome-wide association tests for the polygenic scores of the TSH variants showed an association between high TSH polygenic scores and low thyroid cancer risk, and two-sample Mendelian randomization analysis also suggested that the variants associated with higher TSH levels could potentially reduce thyroid cancer risk in several independent populations. In a similarly designed GWAS meta-analysis in up to 72,167 European-descent individuals from the Breast Cancer Association Consortium and UK Biobank, Yuan et al. showed that genetically predicted TSH levels (OR, 0.47; 95% CI, 0.30 to 0.73) and hypothyroidism (OR, 0.7; 95% CI, 0.51 to 0.98) were inversely associated with thyroid cancer. Although these studies suggested that TSH and hypothyroidism may play a role in thyroid cancer, the causal relationship should be further elucidated in future studies.
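For readers unfamiliar with the two-sample Mendelian randomization approach used in these GWAS-based studies, the core idea is to combine, across independent genetic variants, each variant's association with the outcome (here, thyroid cancer) relative to its association with the exposure (here, TSH). A minimal sketch of the fixed-effect inverse-variance-weighted (IVW) estimator is shown below; the numbers are hypothetical, and this is not a re-analysis of the cited studies, whose actual pipelines involve additional steps such as instrument selection and sensitivity analyses.

```python
from math import sqrt, exp

def ivw_estimate(beta_x, beta_y, se_y):
    """Fixed-effect inverse-variance-weighted causal estimate and its standard error."""
    num = sum(bx * by / se ** 2 for bx, by, se in zip(beta_x, beta_y, se_y))
    den = sum(bx ** 2 / se ** 2 for bx, se in zip(beta_x, se_y))
    theta = num / den
    return theta, sqrt(1 / den)

# Hypothetical per-variant summary statistics (not taken from the cited studies).
beta_x = [0.10, 0.08, 0.12, 0.05]      # variant-exposure effects (e.g., on TSH)
beta_y = [-0.06, -0.04, -0.08, -0.02]  # variant-outcome effects (log-odds of thyroid cancer)
se_y = [0.020, 0.030, 0.025, 0.040]    # standard errors of the outcome effects

theta, se = ivw_estimate(beta_x, beta_y, se_y)
low, high = theta - 1.96 * se, theta + 1.96 * se
print(f"OR per unit increase in exposure: {exp(theta):.2f} "
      f"(95% CI {exp(low):.2f} to {exp(high):.2f})")
```

In this toy example the variant-outcome effects run opposite to the variant-exposure effects, so the estimated odds ratio falls below 1, mirroring the direction of the inverse associations reported above.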
The Bethesda classification defines the probability of malignancy according to cytopathologic findings from fine-needle aspiration of thyroid nodules. Indeterminate cytology is a cumbersome category, with a difficult-to-define risk of cancer, and comprises approximately 20% of thyroid nodules. Recently, molecular tests (gene panels) have been proposed to reduce the need for diagnostic lobectomy in patients with nodules with indeterminate cytology. Livhits et al. investigated the effectiveness of molecular testing techniques, and in particular sought to determine whether an RNA test (Afirma genomic sequencing classifier) or a DNA-RNA test (ThyroSeq v3 multigene genomic classifier) offered superior performance in estimating the risk of malignancy of thyroid nodules with indeterminate cytology. In their randomized clinical trial of 346 patients with 372 indeterminate thyroid nodules, the prevalence of malignancy was 20%. The RNA test and the DNA-RNA test showed no statistically significant difference in performance, including sensitivity (100% vs. 97%, respectively), specificity (80% vs. 85%, respectively), and positive predictive value (53% vs. 63%), allowing approximately half of patients with indeterminate nodules to avoid diagnostic surgery (51% vs. 49%, respectively). Advanced computing and imaging techniques are also being applied to the diagnosis of thyroid nodules. A systematic review and meta-analysis of 19 studies involving 4,781 nodules evaluated the efficacy and accuracy of machine learning-based diagnosis. The diagnostic performance of deep learning was comparable to that of radiologists (sensitivity, 0.87 [95% CI, 0.78 to 0.93] vs. 0.87 [95% CI, 0.5 to 0.89]; specificity, 0.85 [95% CI, 0.76 to 0.91] vs. 0.87 [95% CI, 0.81 to 0.91]; diagnostic OR, 40.12 [95% CI, 15.58 to 103.33] vs. 44.88 [95% CI, 30.71 to 65.57]). Radiomics-assisted diagnosis based on ultrasound imaging of thyroid lesions adequately predicted the risk of lymph node metastasis in PTC. Imaging data were collected by three ultrasound instruments (GE, SuperSonic, and Kretztechnik) and analyzed by four models with quantitative indexes (statistical model, traditional radiomics model, nontransfer learning radiomics, and transfer learning radiomics [TLR]). The TLR model achieved an area under the curve of 0.95, indicating high accuracy for predicting lymph node metastasis, and was validated in an independent dataset from another hospital. By predicting the presence of neck lymph node metastasis, the TLR model can be applied to select candidates for active surveillance of PTC.
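The performance figures reported for these molecular tests are linked by Bayes' rule: given the roughly 20% malignancy prevalence among indeterminate nodules, sensitivity and specificity determine the positive and negative predictive values. The sketch below illustrates this relationship using the rounded sensitivities and specificities quoted above; because the inputs are rounded and the trial computed predictive values directly from observed counts, the outputs only approximate the published figures.

```python
def predictive_values(sens: float, spec: float, prev: float):
    """Positive and negative predictive values from sensitivity, specificity, and prevalence."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Rounded figures quoted above; the trial derived its predictive values from observed counts.
for name, sens, spec in [("RNA test", 1.00, 0.80), ("DNA-RNA test", 0.97, 0.85)]:
    ppv, npv = predictive_values(sens, spec, prev=0.20)
    print(f"{name}: PPV {ppv:.0%}, NPV {npv:.0%}")
```

This also illustrates why even a highly sensitive test yields only a modest positive predictive value at this prevalence, whereas its negative predictive value approaches 100%, which is what allows a negative result to avert diagnostic surgery.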
Anaplastic thyroid carcinoma (ATC) remains one of the most aggressive and fatal solid tumors. Recently, combination therapy with dabrafenib and trametinib demonstrated substantial survival improvement in patients harboring the BRAF V600E mutation. Furthermore, neoadjuvant BRAF-directed therapy showed the feasibility of complete resection and local disease control. Based on these findings, the emerging use of targeted therapy, immunotherapy, surgery, and radiation therapy might improve overall survival (OS) in patients with ATC. In a single-institution (the University of Texas MD Anderson Cancer Center) cohort study of 479 patients with ATC over nearly 20 years, 1- and 2-year OS significantly increased from 35% and 18% in the 2000 to 2013 period to 47% and 25% in the 2014 to 2016 period, and 59% and 42% in the 2017 to 2019 period, respectively. They found that a coordinated multidisciplinary approach could improve OS, with hazard ratios of 0.45 (95% CI, 0.39 to 0.63) for targeted therapy, 0.58 (95% CI, 0.36 to 0.94) for the addition of immunotherapy to targeted therapy, and 0.29 (95% CI, 0.10 to 0.78) for surgery following neoadjuvant BRAF-directed therapy. The last group (n = 20) showed a 1-year survival of 94% with a median follow-up of 1.21 years. The researchers proposed a treatment algorithm for patients with ATC based on staging and BRAF mutational status, and showed that preemptive genetic profiling and directed immunotherapy might lead to better treatment outcomes for patients with ATC, who otherwise face a grave prognosis. Another disease entity that has lacked breakthrough therapies is radioactive iodine therapy-refractory differentiated thyroid cancer (DTC). Multikinase inhibitors targeting growth factor signaling pathways (sorafenib and lenvatinib) were developed and approved as anti-cancer drugs for patients with advanced progressive DTC. Recently, novel target molecules, including rearranged during transfection (RET), tropomyosin receptor kinase (TRK), and somatostatin receptors, have emerged. The RET proto-oncogene encodes a transmembrane receptor tyrosine kinase that is constitutively activated in several types of cancers. Germline or somatic RET mutations were found in approximately 70% of medullary thyroid cancers (MTCs), and RET fusions are found in fewer than 10% of DTC. Selpercatinib, a selective RET inhibitor, demonstrated efficacy and safety in a phase 1–2 trial of 162 patients with RET-altered thyroid cancer. The response rate was 73% (95% CI, 62% to 82%) in 88 treatment-naïve patients with RET-mutant MTC, and 69% (95% CI, 55% to 81%) and 79% (95% CI, 54% to 94%) in previously treated RET-mutant MTC (n = 55) and RET fusion-positive thyroid cancer (n = 17), respectively. Selpercatinib showed durable efficacy with mainly low-grade toxic effects. The most common adverse event (grade 3 or higher) was hypertension, which was observed in 21% of patients. This study suggests that effective molecular screening for RET mutations will be essential in selecting appropriate patients. Fusion of a TRK gene with another gene occurs in DTC and ATC and both drives carcinogenesis and promotes progression. There are two Food and Drug Administration-approved drugs for DTC patients: larotrectinib and entrectinib. Cabanillas et al. analyzed data from 28 patients with advanced metastatic disease harboring neurotrophic tyrosine receptor kinase (NTRK) gene fusions, pooled from two larotrectinib phase 1–2 clinical trials (NCT02122913 and NCT02576431).
The objective response rate was 75% (95% CI, 55% to 89%), and the duration of response ranged from 1.9 to 41.0 months. Adverse events were mostly grade 1–2. These findings suggested that larotrectinib was highly efficacious and that its safety profile was favorable. Entrectinib, another NTRK inhibitor, showed a response in one out of five patients with thyroid cancer in an integrated analysis of three phase 1–2 trials (ALKA-372–001, STARTRK-1, and STARTRK-2). Somatostatin receptor type 2 (SSTR2) is highly expressed in tumor tissues. Thakur et al. suggested that SSTR2 may function as a promising target molecule in thyroid cancer. SSTR2 was expressed more intensely in thyroid cancer lesions than in normal tissues, and a radiolabeled somatostatin analog targeting SSTR2 (68Ga-DOTA-TATE) showed higher uptake in thyroid cancer patients, particularly in Hürthle cell thyroid cancer resistant to radioactive iodine therapy. Treatment with 177Lu-DOTA-EB-TATE, which has higher theranostic efficacy, reduced tumor size and extended survival in a mouse model. This novel radiolabeled somatostatin analog has the potential to be applied to the diagnosis and treatment of resistant thyroid cancer.
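As an aside on how response-rate confidence intervals such as the 75% (95% CI, 55% to 89%) figure above are typically computed, the following minimal sketch reproduces that interval from 21 responders among 28 patients using an exact (Clopper-Pearson) binomial interval. The 21/28 split is inferred from the quoted percentage, and the interval method is an assumption, not a detail reported by the trials.

```python
# A minimal sketch, assuming an exact (Clopper-Pearson) binomial interval:
# 21 responders among 28 patients gives a 75% response rate with a 95% CI of ~55%-89%.
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

print(clopper_pearson(21, 28))  # approximately (0.55, 0.89)
```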
The outbreak of COVID-19, caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is causing overwhelming challenges for health systems. Several retrospective reports have described the prevalence of thyroid-related diseases and investigated changes in treatment strategies as well as access to medical services. Lania et al. analyzed thyroid function in 287 patients hospitalized with COVID-19 in a non-intensive care unit setting to determine whether this infection contributed to abnormalities in thyroid function. Seventy-three patients (25.4%) had thyroid dysfunction, including thyrotoxicosis in 58 patients and hypothyroidism in 15 patients. Low TSH levels were associated with high levels of the inflammatory cytokine interleukin-6, suggesting that COVID-19 may be associated with a high risk of thyrotoxicosis. A similar study was also performed in China: Chen et al. reported that low TSH levels were present in 56% of patients with COVID-19. After recovery, the thyroid hormone levels of COVID-19 patients and control groups were not significantly different. Muller et al. evaluated the frequency of subacute thyroiditis in COVID-19 patients requiring high-intensity care units (HICUs) or low-intensity care units (LICUs) as compared with non-COVID patients admitted to HICUs in Italy. Their study demonstrated that the 93 COVID-19 patients in HICUs initially had lower TSH levels and higher C-reactive protein levels than non-COVID-19 patients in HICUs or COVID-19 patients in LICUs. Although more research is needed, these studies suggest that COVID-19 is associated with systemic immune activation that may cause thyroid inflammation and result in hyperthyroidism or thyroiditis.
There have been rapid advances in the understanding of thyroid diseases in recent decades, driven by breakthroughs in computational and genetic technology and by the growing demand for personalized medicine. Endocrinology and Metabolism looks forward to publishing excellent and promising results in this field in 2021.
Gastrointestinal Endoscopy in Patients with Coronavirus Disease 2019
• SARS-CoV-2 transmission seems to occur mainly via respiratory particles (respiratory droplets and smaller aerosols expelled from the respiratory tract during speaking, breathing, and coughing) and via close contact with infected persons.
• WHO and CDC advise using respirator masks, such as N95s, when performing procedures that might pose a higher risk of transmission if the patient has SARS-CoV-2 infection (eg, procedures that generate potentially infectious aerosols or involve anatomic regions where viral loads might be higher, such as the nose and throat, oropharynx, and respiratory tract).
• Endoscopic findings in patients with COVID-19 suggest that SARS-CoV-2 does not seem to behave as a highly invasive and injurious pathogen to gastrointestinal mucosa.
The spread of coronavirus disease 2019 (COVID-19), caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), was declared a pandemic by the World Health Organization (WHO) on March 11, 2020. Since the outbreak was first identified in December 2019 in Wuhan, China, the public health and social impact of the disease and the cumulative morbidity and mortality across the world have been enormous. As with any new or emerging pathogen, early in the pandemic there was limited evidence and understanding of how SARS-CoV-2 was transmitted, limited testing capability, and resource constraints, especially in the availability of personal protective equipment (PPE). Endoscopy centers shut down and the volume of endoscopic procedures plummeted, save for urgent, lifesaving, or time-sensitive procedures. In line with international consensus statements and guidelines as well as local state- and health system-level policies, endoscopy centers slowly reopened and increased their volume of procedures with the paramount goal of reducing the potential risk of infection for patients and health care workers (HCWs). Many studies showed drastic reductions in endoscopy volumes during the onset of the pandemic and persistent reductions in procedural volumes for sustained periods thereafter. This article summarizes the evolution of our understanding of SARS-CoV-2 infection, the performance of safe endoscopy, as well as indications and endoscopic findings in patients with COVID-19.
Understanding Modes of Transmission of Respiratory Viruses
A critical aspect of managing any pandemic caused by a respiratory virus is a clear understanding of how the infectious pathogen is transmitted and of the equipment or protection therefore needed to minimize transmission. Respiratory viruses are transmitted between individuals when the virus is released from the respiratory tract of an infected person and is transferred through the environment to infect the respiratory tract of an exposed and susceptible person. The major modes of transmission of a respiratory virus from one person to another include large droplets, aerosols, direct contact, or indirect contact (fomites). Often, the relative contributions of different modes to a successful transmission and the relative effects of each mode, as well as modifications of risk by viral, host, and environmental factors, are unknown.
Understanding Modes of Transmission of Severe Acute Respiratory Syndrome Coronavirus 2
Our current understanding of SARS-CoV-2 transmission has shifted and evolved since the beginning of the pandemic. According to the WHO, SARS-CoV-2 transmission seems to occur mainly via respiratory particles and close contact with infected symptomatic cases. These particles include not only larger respiratory droplets but also droplets as small as 5 μm and smaller aerosols that are expelled from the respiratory tract during speaking, breathing, and coughing. The risk of transmission via aerosols is influenced by many factors, including the concentration and mass of particles emitted, the viral load, the proximity and duration of exposure, and the circulation of air in the environment. The relative contribution of each particle size to virus transmission, however, is unknown. Epidemiologic evidence suggests that the risk of transmission is predominantly from short-range exposure to a person who generates significant amounts of virus.
The SARS-CoV-2 virus has been detected in the air with a half-life of just more than 1 hour, and this evidence was offered as proof of "viable" virus that could be transmitted via aerosolization. However, this study was significantly limited in that it was conducted in a laboratory setting under an artificially created environment and was not representative of real-world conditions. Human-to-human transmission can also occur from unknown infected persons (eg, asymptomatic carriers or individuals with mild symptoms), as well as from individuals shedding virus during the presymptomatic incubation period, before symptoms develop. A potentially compounding factor for transmission events is the contagiousness and transfer of SARS-CoV-2 infectious particles from fomites or contaminated surfaces (eg, door handles). As other coronaviruses and respiratory viruses are known to be transmitted this way, spread through fomites may be an additional source of transmission. In early studies of hospitalized patients with COVID-19, positive SARS-CoV-2 samples were identified in various locations around patients' rooms, including the bed, sink, bathroom, light switches, and doors. In addition, positive samples were found on the shoes and stethoscopes of staff exiting patient rooms, but no contamination was found in the anteroom or corridor outside the room. These studies raised concerns about environmental contamination by patients with SARS-CoV-2 through respiratory droplets and fecal shedding. Despite the consistent evidence of SARS-CoV-2 contamination and survival of the virus on certain surfaces, there have been no specific reports demonstrating direct fomite transmission, and the risk is generally thought to be small. People who come into contact with potentially infectious surfaces often also have close contact with the infectious person, making respiratory droplet and fomite transmission difficult to differentiate. Viral SARS-CoV-2 particles have been isolated from various bodily fluids, including feces, urine, saliva, semen, and tears, raising concerns about possible transmission through these routes; however, the presence of viral particles in these fluids has not been shown to correlate with clinical symptoms. The detection of viral particles in the stool was of particular importance because coronaviruses can have direct pathogenicity in the gastrointestinal (GI) tract and cause enteric diseases; this raised concerns about fecal-oral spread as well as the safety of endoscopy, because aerosolization and increased exposure to fecal material may pose additional infectious risk. According to one systematic review of 35 studies that included 1636 patients with laboratory-confirmed COVID-19 who underwent fecal, anal, and/or rectal swab SARS-CoV-2 RNA examinations, the pooled prevalence of fecal SARS-CoV-2 was 43%, with about half of these patients demonstrating persistent shedding even after respiratory samples turned negative; shedding was found more commonly in patients with GI symptoms. Despite these data, no cases of direct fecal-oral transmission were reported, calling into question the viability and infectivity of SARS-CoV-2 found in fecal matter. Importantly, wastewater evaluation has been a useful surveillance strategy for tracking and predicting community COVID-19 prevalence and the associated health care utilization.
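For readers unfamiliar with how a single pooled estimate such as the 43% fecal-shedding prevalence is produced from many small studies, the sketch below illustrates one common approach, inverse-variance pooling of study proportions on the logit scale. The study counts are invented for illustration, and the cited review's actual statistical model is not specified here.

```python
# Hypothetical illustration of pooling study-level prevalences on the logit scale.
# Counts are invented; this is not the cited review's analysis.
import math

studies = [(12, 25), (40, 90), (15, 40), (60, 130)]  # (patients shedding, patients tested)

def pooled_prevalence(studies):
    weighted_sum, weight_total = 0.0, 0.0
    for k, n in studies:
        p = (k + 0.5) / (n + 1.0)                         # continuity-corrected proportion
        logit = math.log(p / (1.0 - p))
        variance = 1.0 / (k + 0.5) + 1.0 / (n - k + 0.5)  # approximate variance of the logit
        weight = 1.0 / variance
        weighted_sum += weight * logit
        weight_total += weight
    pooled_logit = weighted_sum / weight_total
    return 1.0 / (1.0 + math.exp(-pooled_logit))          # back-transform to a proportion

print(round(pooled_prevalence(studies), 2))
```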
The Role of Personal Protective Equipment in Minimizing Risk of Infection from Severe Acute Respiratory Syndrome Coronavirus 2
PPE includes gowns, gloves, eye protection (eg, face shield or goggles), and surgical/medical or respirator masks. Surgical masks (also known as medical masks) are fluid resistant and often used for droplet precautions, because they are designed to block large particles, but they are less effective in blocking small particle aerosols (<5 μm). Surgical masks provide a barrier to prevent droplets reaching the wearer's nose, mouth, and respiratory tract. Most masks are not designed to fit closely to the face, which means that airborne particles (aerosols <100 microns) could potentially pass through the gap between the mask and the face. In contrast, respirator masks are designed to block aerosols. Respiratory protection for airborne precautions in health care commonly follows 2 filtering device paths: N95 or N99 masks/respirators or filtering facepiece respirators (such as FFP2 or FFP3), and powered air-purifying respirators (PAPRs). The N95 masks filter at least 95% of aerosols (<5 μm) and droplet-size (5–50 μm) particles and are not resistant to oil. Lightweight, no-hose PAPRs, which force air through a large, multilayer filter housed in the helmet and provide positive pressure within the face-shield compartment, are a highly effective alternative to face masks. These devices are approved by the US National Institute for Occupational Safety and Health and can provide high-level protection from common airborne viruses that exceeds that of N95 face masks without the need for "fit-testing"; they also have the advantage of providing head and neck protection. Maximum protection is achieved only with proper donning and doffing techniques.
Requirements for Personal Protective Equipment During Endoscopy
Owing to the high risk of human-to-human transmission and the potential for transmission of SARS-CoV-2 infection during routine performance of endoscopy, there was a lack of clarity regarding the necessity of PPE. Since the initial SARS outbreak in the early 2000s, there has been ongoing recognition that certain medical interventions, labeled aerosol-generating procedures (AGPs), increase the risk of infection due to aerosol generation. According to the WHO, an AGP is any medical or patient care procedure that results in the production of airborne particles, or aerosols, that are "associated with an increased risk of pathogen transmission" and therefore require enhanced precautions. Per the WHO, the following procedures were considered AGPs: open airway suction, sputum induction, cardiopulmonary resuscitation, endotracheal intubation and extubation, noninvasive ventilation such as bilevel positive airway pressure and continuous positive airway pressure, bronchoscopy, and manual ventilation. The quantitative evidence to support this categorization was, however, limited to retrospective cohort/case-control studies that were all deemed to be of very low quality. There was significant controversy in the gastroenterology community as to whether upper or lower endoscopy qualified as an AGP. AGP classification was critical in informing infection prevention and control policies, specifically the requirements for respiratory protective devices, such as N95 or N99 masks/respirators or filtering facepiece respirators (such as FFP2 or FFP3), during endoscopy.
In the context of COVID-19, classification of a procedure as an AGP necessitated a higher grade of PPE to protect against aerosolized virus and potential airborne transmission risk. Although certain interventions such as intubation and bronchoscopy were acknowledged as high risk, there was considerably more uncertainty about endoscopic procedures. Possible sources of aerosolization during endoscopy include intubation and removal of the endoscope, coughing, belching during endoscopy, heavy breathing from sedation, patient expulsion of gas and liquid, dispersion of contaminated fluid during insertion and removal of tools through the working channel of the endoscope, adjustment of the air/water button, retrieval of tissue from a biopsy channel, and precleaning of the endoscope. Our knowledge of the role of aerosol generation during endoscopy has expanded during the course of the pandemic. Several investigators, using various techniques, have studied this phenomenon to help us better understand the degree and quantity of aerosolization generated during routine endoscopy. These newer studies are summarized in . A major criticism of categorizing AGPs into discrete dichotomous categories (AGP vs non-AGP and high-risk vs low-risk AGPs) is that this categorization does not consider the continuum of procedure-related aerosol generation and the different levels of transmission risk. Thus, there is likely a hierarchy of AGPs, with each intervention conveying a different degree of transmission risk. Further complicating this issue, numerous studies have shown that certain respiratory events, such as coughing, can generate vastly greater numbers of droplets and aerosols than currently classified AGPs. In addition, some studies have found that traditional AGPs pose no greater risk than talking or breathing. It is difficult to infer the risk of infection from these studies because aerosols may not necessarily contain viable virus material, and the amount and quantity of aerosol generation does not equate to infectivity from endoscopy. In summary, aerosol generation occurs as a continuum, and endoscopy is associated with variable degrees of aerosolization. The risk of infection from aerosolized viral particles is, however, associated not only with the degree of aerosolization but also with other factors such as the quantity of infective virus, proximity to the source, and room ventilation. Based on these studies, there is increasing consensus that upper GI endoscopy should be classified as an AGP and that periprocedural management, including PPE recommendations, should follow AGP protocols to minimize transmission. Current recommendations by the WHO and Centers for Disease Control and Prevention (CDC) advise the use of respirator masks, such as N95s or N99s, when performing surgical procedures that might pose higher risk for transmission if the patient has SARS-CoV-2 infection. Such procedures generate potentially infectious aerosols or involve anatomic regions where viral loads might be higher, such as the nose and throat, oropharynx, or respiratory tract. Respirator masks are warranted in caring for individuals with COVID-19 or when community transmission levels increase, but standard surgical masks are adequate for routine care not involving aerosol-generating procedures.
A systematic review of 172 observational studies on COVID-19, SARS-CoV-1, and Middle East respiratory syndrome coronavirus indicated that people, including HCWs, are strongly protected by wearing surgical face masks (adjusted odds ratio, 0.15; 95% confidence interval, 0.07–0.34), with eye protection potentially conferring additional benefit.
Early Impact of the Coronavirus Disease 2019 Pandemic on Endoscopy Units
In March 2020, when the COVID-19 outbreak was declared a global pandemic, all endoscopy services came to a virtual halt. Considering the escalating rates of hospitalizations and deaths, limited PPE availability, limited COVID-19 test availability, and the burden on the health care system, routine elective endoscopy services were temporarily discontinued. HCWs, physicians, and nursing staff were redeployed, and protocols were developed for triaging endoscopies to identify and perform only procedures for urgent or emergent indications. Although there were variations in how procedures were prioritized, many centers limited procedures to the following indications: active GI bleeding, acute cholangitis, food impactions, GI obstructions, and cancer diagnosis/staging/treatment. This strategy aimed to reduce the risk of spreading infection, preserve limited PPE supplies, and reduce the use of hospital resources. Numerous studies from the United States, United Kingdom, The Netherlands, Canada, China, Spain, Japan, and Taiwan reporting on endoscopy volumes during the initial 3 to 4 months of the pandemic demonstrated reductions in the total number of upper endoscopies and colonoscopies of 51% to 72% and 59% to 85%, respectively (compared with the same period from prior years). After the initial phase, many centers resumed limited endoscopy services with the implementation of stringent infection prevention and control policies and worked to reduce the backlog of colonoscopies by offering patients noninvasive stool-based tests for colorectal cancer screening.
Resumption of Endoscopy with a Focus on Safety During the Coronavirus Disease 2019 Pandemic
An important framework for managing health and safety interventions, used by the CDC to develop infection control policies, was the Hierarchy of Controls, which recommends strategies to reduce the risk of exposure to the virus in addition to the use of PPE. Such strategies included eliminating hazards by avoiding admission/treatment of people with active infection and using COVID-19 testing to segregate patients with the infection. Engineering controls, such as physical barriers, and administrative controls to facilitate physical distancing were also included in the hierarchy. Finally, given the physical proximity required to deliver many elements of care, the use of PPE was also a required control measure within the health care environment. Following the Hierarchy of Controls framework, various operational changes were implemented across endoscopy suites and centers to safely reopen endoscopy units while mitigating the risk of infection. These changes were implemented based on local factors such as availability of resources, local prevalence of COVID-19, patient demographics, procedure indication, and hospital/endoscopy unit policies. The common goals of these changes were to maintain endoscopic volume and efficiency while minimizing the risk of transmission and infection to patients, staff, and HCWs.
Sources of human-to-human transmission included unknown infected persons (eg, asymptomatic carriers or individuals with mild symptoms), as well as individuals shedding virus during the presymptomatic incubation period. Sources of risk during endoscopy included aerosols generated during endoscopy, which could increase the potential for subsequent airborne transmission; infection from patients' respiratory secretions; and potential contamination from other bodily fluids (stool and patient saliva). Many authorities issued guidance on how to safely restart routine endoscopy and advocated for stringent infection control policies that included universal masking of patients, symptom screening before endoscopy, COVID-19 testing before endoscopy, and use of high-level PPE.
Box 1. Overview of modifications implemented across various endoscopy centers during various stages of the pandemic, before the availability of vaccines
Preprocedure modifications
• Triage and risk stratification used a screening questionnaire covering (1) symptoms of COVID-19 (such as cough, shortness of breath, and persistent fever), (2) known history of contact with a patient with COVID-19, and (3) travel to high-risk areas; screening was performed in all cases at least 24 to 72 hours before endoscopy.
• Preprocedure SARS-CoV-2 testing: individualized protocols for outpatient preprocedural testing 24 to 72 hours before the scheduled appointment, depending on local prevalence rates and institutional policies; reverse transcription-polymerase chain reaction testing was performed in all asymptomatic patients before endoscopic procedures to risk stratify and determine PPE needs (see section later).
• Patient reassurance about safety precautions taken to decrease transmission from patient to patient.
Procedural modifications for patients
• All patients required to wear surgical masks and keep at least 1 to 2 m distance from others.
• Arrangements made in advance to reduce patient congestion in the waiting area; chairs and beds spaced to avoid transmission of viral particles to noninfected patients.
• Informed consent included informing individuals about the possible risk of nosocomial COVID-19 infection during endoscopy.
• Patients informed to report back if experiencing any de novo symptoms postprocedure.
• Triage and screening questionnaire repeated at presentation to the endoscopy unit, again covering (1) symptoms of COVID-19 (such as cough, shortness of breath, and persistent fever), (2) known history of contact with a patient with COVID-19, and (3) travel to high-risk areas.
• High-risk status, defined by the presence of respiratory tract symptoms, previous travel to COVID-19 locations in the past 14 days, or close contact with COVID-19-positive patients, prompted procedure cancellation and self-quarantine.
• Temperature measurements before entering the endoscopy unit.
• Patient's relative/caregiver or driver required to wait offsite and return after the procedure is completed; if this is not feasible, the waiting area should be appropriately distanced.
Procedural modifications for HCWs
• Barriers such as glass or plastic walls/shields set up in check-in areas.
• Safe distancing in the preoperative area and decreased numbers of patients received by nursing staff for preprocedure care.
• Endoscopy staff with preexisting conditions at higher risk of contracting COVID-19 assigned nonclinical duties.
• Use of PPE mandated by all health care systems to minimize the risk of transmission: all endoscopy team members required to wear surgical masks, gloves, hair coverings, face shields or goggles, waterproof disposable gowns, and shoe covers or boots.
• Initially, use of the highest level of PPE mandated; eventually, PPE for endoscopy personnel adjusted according to patient risk stratification, with full PPE required for high-risk or confirmed COVID-19-positive patients.
• In low-resource settings, reusable respirators, face shields, goggles, and boots deemed acceptable after appropriate sterilization and decontamination.
• Training in, and adherence to, strict precautions for properly donning and doffing PPE.
• Staff required to complete a symptom questionnaire before their daily work.
• Staff required to keep at least 1 to 2 m of distance from other staff and from patients.
• For COVID-19-positive (or suspected) cases, procedures performed in a negative pressure endoscopy unit, if available, or with portable industrial-grade high-efficiency particulate air filters placed in endoscopy rooms; in low-resource situations, adequate ventilation of the room was acceptable.
• As much as possible, all required documentation performed outside the endoscopy room.
• Minimal number of workers in the procedure room to minimize risk; team switching during procedures discouraged to minimize PPE usage and decrease contamination risks.
Postprocedure modifications
• Procedural downtime and room turnover time needed to allow dispersion of potentially virus-laden aerosols depends on the rate of air changes per hour; the precise time needed for closure of the room depends on the use of negative pressure and the air-exchange rate.
• For patients with COVID-19, some centers used only negative pressure rooms (room maintained under negative pressure for at least 30 minutes, or for 60 minutes in the absence of negative pressure, before the next patient).
• Initially, patients were monitored in the recovery area with no family members in the waiting room; eventually, limited family members were allowed in the waiting room with adequate spacing between seats and a face mask requirement.
• Postprocedure telephone follow-up used to enquire about any new COVID-19-related symptoms (patients traced and contacted after 7 and 14 days).
Endoscopy Room and Endoscope Cleaning
Enhanced cleaning procedures, with cleaning of all horizontal surfaces, especially frequently touched surfaces and areas within a few feet of the patient (using a standard hospital-grade disinfectant solution with viricidal agents), were implemented by most endoscopy units. Endoscope cleaning and decontamination processes remained unchanged: as per guidelines, mechanical and detergent cleaning followed by high-level disinfection (a process that eliminates or kills all vegetative bacteria, mycobacteria, fungi, and viruses, except for small numbers of bacterial spores, and reduces the number of microorganisms and organic debris by 4 logs, or 99.99%).
Preprocedure Testing: Changing Recommendations Through the Course of the Pandemic
The use of preprocedure testing in asymptomatic individuals became a common path to triage for risk stratification.
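A minimal sketch (not an official protocol) of how the questionnaire-and-test-based triage summarized in Box 1 can be combined into a single preprocedure disposition is shown below; the categories, thresholds, and dispositions are illustrative assumptions rather than recommendations from any society guideline.

```python
# Illustrative preprocedure triage logic based on the Box 1 items; not a clinical protocol.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Screening:
    has_symptoms: bool              # eg, cough, shortness of breath, persistent fever
    close_contact: bool             # known contact with a COVID-19 case
    high_risk_travel: bool          # travel to a high-risk area in the past 14 days
    test_result: Optional[str]      # "positive", "negative", or None if untested

def triage(s: Screening) -> str:
    # Any positive screening answer or a positive test defers an elective case.
    if s.has_symptoms or s.close_contact or s.high_risk_travel or s.test_result == "positive":
        return "defer elective procedure; if urgent, use full PPE in a negative-pressure room"
    if s.test_result == "negative":
        return "proceed; surgical mask may be acceptable per local policy"
    # Untested asymptomatic patient: default to respirator-level PPE.
    return "proceed with N95/respirator-level PPE"

print(triage(Screening(has_symptoms=False, close_contact=False, high_risk_travel=False, test_result=None)))
```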
A critical aspect of resuming endoscopy services was providing reassurance to patients and, importantly, to HCWs, including endoscopists, nurses, and staff. At the onset of the pandemic, in the absence of available diagnostic tests and knowledge of treatments for COVID-19, one of the earliest evidence-based guidelines was developed by the American Gastroenterological Association (AGA); the guideline panel members made a strong recommendation to use N95 (or N99 or PAPR) masks (along with gowns, shoe covers, goggles, and face shields) instead of surgical masks for all HCWs performing upper endoscopies. Recommendations also included wearing double gloves and using negative pressure rooms; these recommendations placed a high value on minimizing risks to HCWs because of documented community spread during a pandemic, despite low or very low certainty of evidence regarding the risk of transmission of infection. In addition to limited resources for testing, limitations of PPE availability necessitated reuse or prolonged use of N95 masks. Finally, the decision to extend the recommendation to lower GI tract procedures was based on limited evidence of possible aerosolization during colonoscopy and the uncertain risks associated with evidence of the presence of SARS-CoV-2 RNA in fecal samples. These recommendations assumed the absence of widespread reliable testing for the diagnosis of COVID-19 infection or immunity and unclear data on prevalence. As the number of COVID-19 tests that received Emergency Use Authorization approval increased, preprocedure testing became more readily available, and questions arose regarding the role of routine preprocedure testing of all individuals to minimize risk for patients and HCWs. At the individual patient level, testing of symptomatic patients helped identify individuals who could be isolated to prevent the spread of disease. At the population level, widespread testing of individuals (symptomatic and asymptomatic) was critical to determine the true prevalence of disease and the provision of health care services, and to reintroduce endoscopy across health care systems and ambulatory care centers. Recommendations developed by the AGA provided a framework for routine preprocedure testing before endoscopy (for all asymptomatic persons) that accounted for local contextual factors, such as the local prevalence of SARS-CoV-2 and the availability of PPE, and weighed the pros and cons of a pretesting strategy. Based on a systematic review and meta-analysis of the tests available at that time, the guideline authors made conditional recommendations stratified by the local prevalence of asymptomatic infection; in low-prevalence settings, they suggested that endoscopy centers forgo routine preprocedural testing because of concerns about the accuracy of test results and the potential downsides for individuals with false-positive or false-negative test results, and that all HCWs instead wear N95 (or higher) masks, if available. For endoscopy centers where the prevalence of asymptomatic SARS-CoV-2 infection was intermediate (0.5%–2%), the AGA suggested implementing a pretesting strategy, if tests were available, to determine the type of PPE (such as use of surgical masks in individuals who tested negative).
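The concern about false-positive results in low-prevalence settings is essentially a statement about positive predictive value, which can be illustrated with a short calculation; the sensitivity and specificity used below are assumed values for illustration and do not describe the performance of any specific SARS-CoV-2 assay.

```python
# Worked example: positive predictive value (PPV) falls sharply as prevalence drops.
# Sensitivity and specificity are assumed illustrative values, not assay-specific data.
def ppv(prevalence: float, sensitivity: float, specificity: float) -> float:
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

for prev in (0.001, 0.005, 0.02, 0.10):
    print(f"prevalence {prev:.1%}: PPV {ppv(prev, 0.90, 0.98):.1%}")
# At 0.1% prevalence most positive results are false positives; at 10% prevalence the PPV is high.
```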
The changing prevalence of COVID-19 was an important factor as new variants emerged and drove documented new waves of infection. The rapid development of vaccines and the widespread implementation of vaccination programs worldwide helped decrease morbidity and mortality from COVID-19. Another important positive change was the availability of relatively effective treatments. Furthermore, within the GI community, as understanding of disease transmission increased, as data on infection rates from endoscopy and from universal screening and testing became available, and as PPE became widely available, many endoscopy centers again revised their testing policies. In contrast to reports of high rates of HCW infection early in the pandemic (in the setting of limited PPE), accumulating evidence demonstrated low rates of COVID-19 infection among HCWs performing endoscopy. This evidence, along with data demonstrating the relative effectiveness of vaccines in decreasing rates of transmission, prompted a recommendation against routine preprocedure testing, emphasizing the downsides of testing at the patient level (burden, cost, and access) and at the population level (reduced throughput of screening and surveillance endoscopies, leading to lower rates of screening, surveillance, and diagnosis of various GI cancers).
Endoscopic Indications and Findings
In patients with COVID-19, several systematic reviews and meta-analyses have described the prevalence of GI symptoms, including diarrhea (8%–17%), nausea or vomiting (4%–20%), loss of appetite (2%–21%), abdominal pain (3%–20%), anorexia (8%–10%), abdominal distension (1%), and loss of taste (1%–3%). Most GI symptoms associated with COVID-19 are mild. Diarrhea caused by SARS-CoV-2 may be the initial symptom in patients with COVID-19. A small subset of patients with COVID-19 may develop isolated GI symptoms throughout the disease (2.9%–16%). Our understanding of the endoscopic findings in COVID-19 is limited. Several case series and retrospective and prospective cohort studies have helped us to understand the direct and indirect effects of COVID-19 on the GI tract. GI endoscopy for GI bleeding in patients with COVID-19 is reviewed in the article by Cappell and Friedel in this issue. Mechanistically, viruses in the GI tract, including coronaviruses, can contribute to disease by interacting with the mucous layer, epithelial cells, and potentially lamina propria immune cells. SARS-CoV-2 infection can disrupt the tight and adherent junctions of the endothelium and intestinal epithelium, which may lead to a leaky gut, local and systemic invasion by normal microbiota, and consequent immune activation. In one retrospective, single-center study of 95 laboratory-confirmed cases of SARS-CoV-2 infection from Zhuhai, China, 6 patients with GI symptoms underwent upper endoscopy and 2 underwent proctoscopy. Biopsies were taken from the esophagus, stomach, duodenum, and rectum for viral RNA detection. In one patient with severe symptoms, GI bleeding was localized to the esophagus and attributed to multiple round herpetic erosions and ulcers, each 4 to 6 mm in diameter. SARS-CoV-2 RNA was detected in the esophageal erosions and at the bleeding site, as well as in the stomach, duodenum, and rectum. In the other 5 patients (cases 2–6), no erosions, ulcers, or bleeding were noted. SARS-CoV-2 RNA could also be detected in the esophagus, stomach, duodenum, and rectum of another patient with severe COVID-19 infection (case 2).
In contrast, the virus was only detected in the duodenum of the nonsevere case 3 and could not be detected in any GI specimens of the nonsevere cases 4 to 6. In a case report of a patient with COVID-19 who underwent endoscopy, biopsies revealed no damage to the epithelium of the esophagus, stomach, duodenum, and rectum, but infiltrates of occasional lymphocytes were observed in esophageal squamous epithelium and numerous infiltrating plasma cells and lymphocytes with interstitial edema were observed in the lamina propria of the stomach, duodenum, and rectum. In a retrospective study from Lombardy, Italy, 38 patients with confirmed SARS-CoV-2 underwent endoscopic evaluation (24 EGDs, 20 colonoscopies). Endoscopic lesions were observed in 18 of 24 EGDs (75%) and in 14 of 20 colonoscopies (70%). The main findings were esophagitis (20.8%), bulbar ulcer (20.8%), erosive gastritis (16.6%), neoplasm (8.3%), and Mallory-Weiss tear (4.1%). Colonoscopy revealed segmental colitis associated with diverticulosis (25%), colonic ischemia (20%), diffuse hemorrhagic colitis (5%), and colonic neoplasms (5%). Finally, in a multicenter cohort of ∼2000 hospitalized patients with COVID-19 across a geographically diverse network of medical centers in North America, only 1.2% of patients (n = 24) underwent endoscopy despite a high prevalence of GI symptoms and substantial burden of critical or prolonged illness. Most endoscopic procedures were performed for either emergency cases (eg, ongoing GI bleeding or biliary obstruction) or for placement of enteral access tubes. Among those who underwent endoscopy, the indications and findings were judged more likely to reflect overall systemic illness or related to prolonged hospitalization rather than direct viral injury from COVID-19. The investigators did not observe inflammatory pathology and concluded that SARS-CoV-2 did not seem to behave as a highly invasive and injurious pathogen to GI mucosa.
A critical aspect of managing any pandemic from a respiratory virus requires a clear understanding of how an infectious pathogen is transmitted and the equipment or protection that is therefore needed to minimize transmission. Respiratory viruses are transmitted between individuals when the virus is released from the respiratory tract of an infected person and is transferred through the environment, to infect the respiratory tract of an exposed and susceptible person. The major modes of transmission of a respiratory virus from one person to another include large droplets, aerosols, direct contact, or indirect contact (fomites). Often, the relative contributions of different modes to a successful transmission and the relative effects of each mode, as well as modifications of risk by viral, host, and environmental factors, are unknown .
Our current understanding of SARS-CoV-2 transmission has shifted and evolved since the beginning of the pandemic. According to the WHO, SARS-CoV-2 transmission seems to occur mainly via respiratory particles and close contact with infected symptomatic cases. These particles not only include respiratory droplets but also droplets as small as 5 μm, and smaller aerosols that are expelled from the respiratory tract during speaking, breathing, and coughing. The risk of transmission via aerosols is influenced by many factors including the concentration and mass of particles emitted, the viral load, the proximity and duration of exposure, and the circulation of air in the environment. The relative contribution according to particle size in virus transmission, however, is unknown. Epidemiologic evidence suggests that the risk of transmission is predominantly from short-range exposure from a person who generates significant amounts of virus. The SARS-CoV-2 virus has been detected in the air with a half-life of just more than 1 hour, and this evidence was offered as proof of “viable” virus that could be transmitted via aerosolization. However, this study was significantly limited in that it was conducted in a laboratory setting under an artificially created environment and not representative of real-world data. Human-to-human transmission can also occur from unknown infected persons (eg, asymptomatic carriers or individuals with mild symptoms), as well as individuals with virus shedding during the preincubation period before symptoms develop. A potentially compounding factor for transmission events is the contagiousness and transfer of SARS-CoV-2 infectious particles from fomites or contaminated surfaces (eg, door handles). As other coronaviruses and respiratory viruses are known to be transmitted this way, spread through fomites may be an additional source of transmission. In early studies of hospitalized patients with COVID-19 positive SARS-CoV-2 samples were identified in various locations around patients’ rooms, including the bed, sink, bathroom, light switches, and doors. In addition, positive samples were found on the shoes and stethoscopes of staff exiting patient rooms, but no contamination was found in the anteroom or corridor outside the room. These studies raised concerns about environmental contamination by patients with SARS-CoV-2 through respiratory droplets and fecal shedding. Despite the consistent evidence of SARS-CoV-2 contamination and survival of the virus on certain surfaces, there have been no specific reports demonstrating direct fomite transmission and the risk is generally thought to be small. People who come into contact with potentially infectious surfaces often also have close contact with the infectious person, thus making the distinction between respiratory droplet and fomite transmission difficult to differentiate. Viral SARS-CoV-2 particles have been isolated from various bodily fluids, including feces, urine, saliva, semen, and tears, raising concerns about possible transmission through these routes; however, the presence of viral particles in these fluids has not been shown to correlate with clinical symptoms. 
, The detection of viral particles in the stool was of particular importance because coronaviruses can have direct pathogenicity in the gastrointestinal (GI) tract and cause enteric diseases; this raised concerns about fecal-oral spread as well as safety of endoscopy because aerosolization and increased exposure to fecal material may pose additional infectious risk. According to one systematic review of 35 studies that included 1636 patients with laboratory-confirmed COVID-19 who received fecal, anal, and/or rectal swab SARS-CoV-2 RNA examinations, the pooled prevalence of fecal SARS-CoV-2 was 43% with about half of these patients demonstrating persistent shedding even after respiratory samples turned negative, and shedding was found more commonly in patients with GI symptoms. Despite these data, no cases of direct fecal-oral transmission were reported thereby questioning the viability and infectivity of SARS-CoV-2 virus found in fecal matter. Importantly, wastewater evaluation has been a useful surveillance strategy for tracking and predicting rates of prevalent COVID-19 for health care utilization.
PPE includes gowns gloves, eye protection (eg, face shield or goggles), and surgical/medical or respirator masks. Surgical masks (also known as medical masks) are fluid resistant and often used for droplet precautions, because they are designed to block large particles, but are less effective in blocking small particle aerosols (<5 μm). Surgical masks provide a barrier to prevent droplets reaching the wearer’s nose, mouth, and respiratory tract. Most masks are not designed to fit closely to the face, which means that airborne particles (aerosols <100 microns) could potentially pass though the gap between the mask and the face. In contrast, respirator masks are designed to block aerosols. Respiratory protection for airborne precautions in health care commonly follows 2 filtering device paths, N95 or N99 masks/respirators or filtering facepiece respirators (such as FFP2 or FFP3) and powered air-purifying respirators (PAPRs). The N95 masks filter at least 95% of aerosols (<5 μm) and droplet-size (5–50 μm) particles and are not resistant to oil. Lightweight, no-hose PAPRs are a highly effective alternative to face masks that force air through a large, multilayer filter housed in the helmet and provide positive pressure within the face-shield compartment. These devices are approved by US National Institute for Occupational Safety and Hazard and can provide high-level protection from common airborne viruses that exceeds that for N95 face masks without the need for “fit-testing” and also have the advantage of providing head and neck protection. Maximum protection is achieved only with proper donning and doffing techniques.
Owing to the high risk of human-to-human transmission and the potential for transmission of infection with SARS-CoV-2 during routine performance of endoscopy, there was a lack of clarity regarding the necessity of PPE. Since the initial SARS infection in the early 2000s, there was ongoing recognition that certain medical interventions, labeled aerosol-generating procedures (AGPs), increased the risk of potential infection due to aerosol generation. According to the WHO, an AGP is any medical or patient care procedure that results in the production of airborne particles, or aerosols that are “associated with an increased risk of pathogen transmission” and therefore require enhanced precautions. Per the WHO, the following procedures were considered AGPs: open airway suction, sputum induction, cardiopulmonary resuscitation, endotracheal intubation and extubation, noninvasive ventilation such as bilevel positive airway pressure and continuous positive airway pressure, bronchoscopy, and manual ventilation. The quantitative evidence to support this categorization was, however, limited to retrospective cohort/case-control studies that were all deemed as very low quality. The gastroenterology community had a significant controversy as to whether upper or lower endoscopy qualified as AGPs. AGP classification was critical in informing infection prevention and control policies, specifically the requirements for respiratory protective devices, such as N95 or N99 masks/respirators or filtering facepiece respirators (such as FFP2 or FFP3) or masks at endoscopy. In the context of COVID-19, a classification of a procedure as an AGP necessitated a higher grade of PPE to protect against aerosolized virus and potential airborne transmission risk. Although certain interventions such as intubation and bronchoscopy were acknowledged as high risk, there was a lot more uncertainty about endoscopic procedures. Possible sources of aerosolization during endoscopy include intubation and removal of the endoscope, coughing, belching during endoscopy, heavy breathing from sedation, patient expulsion of gas and liquid, and dispersion of contaminated fluid during insertion and removal of tools through the working channel of the endoscope, adjustment of the air/water button, retrieval of tissue from a biopsy channel, and during precleaning of the endoscope. Our knowledge of the role of aerosol generation during endoscopy has expanded during the course of the pandemic. Several investigators, using various techniques, have studied this phenomenon to help us better understand the degree and quantity of aerosolization that is generated during routine endoscopy. These newer studies are summarized in . A major criticism of this approach to categorizing AGPs into discrete dichotomous categories (AGP vs non-AGP and high-risk vs low-risk AGPs) is that this categorization does not consider the continuum of procedure-related aerosol generation and the different levels of transmission risks. Thus, there is likely a hierarchy of AGPs with each intervention conveying a different degree of transmission risk. Further complicating this issue is that numerous studies have shown that certain respiratory events, such as coughing, can generate vastly greater numbers of droplets and aerosols, considerably more aerosol particles than aerosols generated from currently classified AGPs. , , , , In addition, some studies have found that traditional AGPs pose no greater risk than talking or breathing. 
It is difficult to infer risk of infection from these studies because aerosols may not necessarily contain viable virus material, and the amount and quantity of aerosol generation does not equate to infectivity from endoscopy. In summary, aerosol generation occurs as a continuum and endoscopy is associated with variable degrees of aerosolization. Risk of infection from aerosolized viral particles is, however, associated not only with the degree of aerosolization but also with other factors such as quantity of infective virus, proximity to source, and room ventilation. Based on these studies, however, there is increasing consensus that upper GI endoscopy should be classified as an AGP and periprocedural management including PPE recommendations should follow the AGP protocols to minimize transmission. Current recommendations by the WHO and Centers for Disease Control and Prevention (CDC) advise the use of respirator masks, such as N95s or N99s, when performing surgical procedures that might pose higher risk for transmission if the patient has SARS-CoV-2 infection. These procedures generate potentially infectious aerosols or involve anatomic regions where viral loads might be higher, such as the nose and throat, oropharynx, or respiratory tract. Respirator masks are warranted in caring for individuals with COVID-19 or when community transmission levels increase, but standard surgical masks are adequate for routine care not involving aerosol-generating procedures. , A systematic review of 172 observational studies on COVID-19, SARS-CoV-1, and Middle East respiratory syndrome coronavirus indicated that people, including HCWs, are strongly protected by wearing surgical face masks (adjusted odds ratio, 0.15 95% confidence interval, 0.07–0.34), with eye protection potentially conferring additional benefit.
In March 2020 when the COVID-19 outbreak was declared a global pandemic all endoscopy services came to a virtual halt. Considering the escalating rates of hospitalizations and deaths, limited PPE availability, limited COVID-19 test availability, and the burden on the health care system, routine elective endoscopy services were temporarily discontinued. HCWs, physicians, and nursing staff were redeployed, and protocols were developed for triaging of endoscopies to identify and perform only endoscopic procedures for urgent or emergent indications. Although there were variations in how procedures were prioritized, many centers limited procedures for the following indications: active GI bleeding, acute cholangitis, food impactions, GI obstructions, and cancer diagnosis/staging/treatment. This strategy was aimed to reduce the risk of spreading infection, reducing use of limited PPE supplies, and reducing use of hospital resources. Numerous studies from the United States, United Kingdom, The Netherlands, Canada, China, Spain, Japan, and Taiwan reporting on endoscopy volumes during the initial 3 to 4 months of the pandemic demonstrated reductions in total number of upper endoscopies and colonoscopies of 51% to 72% and 59% to 85%, respectively (compared with the same period from prior years). , , , , , , After the initial phase, many centers resumed limited endoscopy services with the implementation of stringent infection prevention and control policies and worked to reduce the backlog of colonoscopies by offering patients noninvasive stool-based tests for colorectal cancer screening. ,
An important framework for managing health and safety interventions used by the CDC to develop infection control policies was the Hierarchy of Controls, which recommended using strategies to reduce risks of exposure to the virus in addition to the use of PPE. Such strategies included eliminating hazards by avoiding admission/treatment of people with active infection and using COVID-19 testing to segregate patients with the infection. Engineering controls such as physical barriers, and administrative controls to facilitate physical distancing, were also included in the hierarchy. And finally given the physical proximity required to deliver many elements of care, the use of PPE was also a required control measure within the health care environment. Following the hierarchy of controls framework, various operational changes were implemented across endoscopy suites and centers to safely reopen endoscopy units while mitigating the risk of infection. These changes were implemented based on local factors such as availability of resources, local prevalence of COVID-19, patient demographics, procedure indication, and hospital/endoscopy unit policies. The common goals of these changes were to maintain endoscopic volume and efficiency, while minimizing risk of transmission and infection to patients, staff, and HCWs. Sources of human-to-human transmission could occur from unknown infected persons (eg, asymptomatic carriers or individuals with mild symptoms), as well as individuals with virus shedding during the presymptomatic incubation period. Sources of risk during endoscopy included aerosols generated during endoscopy, which could increase the potential for subsequent airborne transmission, infection from respiratory secretions from patients, and potential contamination from other sources of bodily fluid (stool and patient saliva). Many authorities issued guidance on how to safely restart routine endoscopy and advocated for stringent infection control policies that included universal masking of patients, symptom screening before endoscopy, COVID testing before endoscopy, and use of high-level PPE , , , , , , . Box 1 Overview of modifications implemented across various endoscopy centers during various stages of the pandemic before the availability of vaccines Preprocedure modifications Triage and risk stratification used a screening questionnaire for (1) symptoms of COVID-19 (such as cough, shortness of breath, and persistent fever), (2) known history of contact with a patient with COVID-19, and (3) travel to high risk areas. These were performed in all cases at least 24 to 72 hours before endoscopy Preprocedure SARS-CoV-2 testing: individualized protocols for outpatient preprocedural testing of patients 24 to 72 hours before the scheduled appointment depending on local prevalence rates and institutional policies. Reverse transcription-polymerase chain reaction testing was performed in all asymptomatic patients before endoscopic procedures to risk stratify and determine PPE needs (see section later). • Patient reassurance about safety precautions taken to decrease transmission from patient to patient Procedural modifications for patients All patients required to wear surgical masks and keep at least 1 to 2 m distance from others. Arrangements made in advance to reduce patient congestion in the waiting area. Chairs and beds spaced to avoid the transmission of viral particles to noninfected patients. 
Informed consent includes informing individuals about the possible risk of nosocomial infection (COVID-19 infection) during endoscopy Patients informed to report back if experiencing any de novo symptoms postprocedure. Triage and screening questionnaire: at the time of presentation to the endoscopy, questions asked again regarding (1) symptoms of COVID-19 (such as cough, shortness of breath, and persistent fever), (2) known history of contact with a patient with COVID-19, and (3) travel to high-risk areas. These were performed in all cases at least 24 to 72 hours before endoscopy High-risk patients, classified by the presence of respiratory tract symptoms, previous travel to COVID-19 locations in the past 14 days, and close contact with COVID-19-positive patients, prompted procedure cancellation and self-quarantine Temperature measurements before entering the endoscopy unit Patient’s relative/caregiver or driver required to wait offsite and return after the procedure is completed. If this is not feasible, the waiting area should be appropriately distanced. Procedural modifications for HCWs Barriers such as glass or plastic walls/shields set up in check-in areas Safe distancing in the preoperative area as well as decreased numbers of patients that nursing staff can receive for preprocedure care. Endoscopy staff with preexisting conditions at higher risk of contracting COVID-19 have been assigned nonclinical duties Use of PPE mandated by all health care systems to minimize the risk of transmission All endoscopy team members required to wear surgical masks, gloves, hair coverings, face shields or goggles, water-proof disposable gowns, and shoe covers or boots. Initially use of highest level of PPE mandated by all health care systems to minimize the risk of transmission Eventually PPE for endoscopy personnel adjusted according to patient risk stratification with full PPE required for high-risk or confirmed COVID-19-positive patients. In low-resource settings, reusable respirators, face shields, goggles, and boots deemed acceptable after appropriate sterilization and decontamination methods Training and adherence to strict precautions of properly donning and doffing Staff required to complete questionnaire about symptoms before their daily work. Similar distances should be maintained between individuals. Staff required to keep at least 1 to 2 m of distance from staff and patients For COVID-19-positive (or suspected) cases, procedures performed in a negative pressure endoscopic unit, if available, or portable industrial-grade high-efficiency particulate air filters placed in endoscopy rooms In low-resource situations, adequate ventilation of the room was acceptable As much as possible, all required documentation should be performed outside the endoscopy room. Minimal number of workers should be in procedure room to minimize risk Team switching during procedures discouraged to minimize PPE usage and decrease contamination risks Postprocedure modifications Procedural downtime and room turnover time needed to allow for dispersion of potential virus-laden aerosols depends on rate of air changes per hour. 
• The precise time needed for closure of the room depends on the use of negative pressure and the air-exchange rate.
• Patients with COVID-19: some centers used only negative pressure rooms (room maintained under negative pressure for at least 30 minutes, and in the absence of negative pressure, for 60 minutes, before the next patient).
• Initially, patients were monitored in the recovery area, with no family allowed in the waiting room. Eventually, limited family members were allowed in the waiting room with adequate spacing between seats and a requirement of face masks.
• Postprocedure telephone follow-ups with patients used to enquire about the development of any new COVID-19-related symptoms (patients traced and contacted after 7 and 14 days).
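The preprocedure triage in Box 1 amounts to a simple decision rule: any positive answer on the screening questionnaire (symptoms, known contact, or recent travel to a high-risk area) classifies the patient as high risk, prompting cancellation of the procedure and self-quarantine, while standard-risk patients proceed to preprocedure testing under local policy. The short R sketch below is purely an illustration of that rule as summarized from the box above; the function name and the wording of the returned messages are assumptions, and this is not a clinical decision tool.

    # Illustrative encoding of the Box 1 screening rule (not clinical software).
    # Criteria mirror the questionnaire items described above.
    triage_risk <- function(has_symptoms, known_contact, travel_high_risk_14d) {
      if (has_symptoms || known_contact || travel_high_risk_14d) {
        "high risk: cancel procedure and advise self-quarantine"
      } else {
        "standard risk: proceed with preprocedure SARS-CoV-2 testing per local policy"
      }
    }
    triage_risk(has_symptoms = FALSE, known_contact = TRUE, travel_high_risk_14d = FALSE)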
Enhanced cleaning procedures, with cleaning of all horizontal surfaces, especially frequently touched surfaces and areas within a few feet of the patient (using standard hospital-grade disinfectant solution with viricidal agents), were implemented by most endoscopy units. Endoscope cleaning and decontamination processes remained unchanged; as per guidelines, mechanical and detergent cleaning is followed by high-level disinfection (a process that eliminates or kills all vegetative bacteria, mycobacteria, fungi, and viruses, except for small numbers of bacterial spores, and reduces the number of microorganisms and organic debris by 4 logs, or 99.99%).
The use of preprocedure testing in asymptomatic individuals became a common approach to triage and risk stratification. A critical aspect of resuming endoscopy services was providing reassurance to patients and, importantly, to HCWs, including endoscopists, nurses, and staff. At the pandemic onset, in the absence of available diagnostic tests and knowledge of treatments for COVID-19, one of the earliest evidence-based guidelines was developed by the American Gastroenterological Association (AGA); the guideline panel members made a strong recommendation to use N95 (or N99 or PAPR) masks (along with gowns, shoe covers, goggles, and face shields) instead of surgical masks for all HCWs performing upper endoscopies. Recommendations also included wearing double gloves and using negative pressure rooms, placing a high value on minimizing risks to HCWs, despite having low or very low certainty of evidence for risk of transmission of infection, because of documented community spread during a pandemic. In addition to limited resources for testing, limitations of PPE availability necessitated reuse or prolonged use of N95 masks. Finally, the decision to extend the recommendation to lower GI tract procedures was based on limited evidence of possible aerosolization during colonoscopy and the uncertain risk implied by the presence of SARS-CoV-2 RNA in fecal samples. These recommendations assumed the absence of widespread reliable testing for the diagnosis of COVID-19 infection or immunity and unclear data on prevalence. As the number of COVID tests that received Emergency Use Authorization approval increased, preprocedure testing became more readily available, and questions arose regarding the role of routine preprocedure testing of all individuals to minimize risk for patients and HCWs. At the individual patient level, testing in symptomatic patients helped identify individuals who could be isolated to prevent the spread of disease. At the population level, widespread testing of individuals (symptomatic and asymptomatic) was critical to determine the true prevalence of disease and the provision of health care services, and to reintroduce endoscopy across health care systems and ambulatory care centers. Recommendations developed by the AGA provided a framework for routine preprocedure testing before endoscopy (for all asymptomatic persons) that accounted for local contextual factors such as the local prevalence of SARS-CoV-2 and availability of PPE and weighed the pros and cons of a pretesting strategy. Based on a systematic review and meta-analysis of the tests available at that time, the guideline authors made conditional recommendations against routine preprocedural testing in endoscopy centers in low- and high-prevalence settings, because of concerns about the accuracy of test results and the potential downsides for individuals with false-positive or false-negative results. It was suggested that all HCWs wear N95 (or higher) masks, if available, and forego testing. For endoscopy centers where the prevalence of asymptomatic SARS-CoV-2 infection was intermediate (0.5%–2%), the AGA suggested implementing a pretesting strategy, if tests were available, to determine the type of PPE (such as use of surgical masks in individuals who tested negative).
Alternatively, in settings where the logistics of testing were challenging and the downsides outweighed the benefits, HCWs could choose to wear N95 (or higher) masks and again forego testing. The changing prevalence of COVID-19 was an important factor as new variants emerged and drove documented new waves of infection. The rapid development of vaccines and the widespread implementation of vaccination programs worldwide helped decrease morbidity and mortality from COVID-19. Another important positive change was the availability of relatively effective treatments. Furthermore, within the GI community, as our understanding of disease transmission increased, data on infection rates from endoscopy and universal screening and testing became available, and PPE became widely available, many endoscopy centers again revised their testing policies. In contrast to reports of high rates of HCW infections early in the pandemic (in the setting of limited PPE), accumulating evidence demonstrated low rates of COVID-19 infections among HCWs performing endoscopy. This evidence, along with data demonstrating the relative effectiveness of vaccines in decreasing rates of transmission of infection, prompted a recommendation against routine preprocedure testing, emphasizing the downsides of testing at the patient level (burden, cost, and access) and at the population level (lower rates of screening and surveillance endoscopies leading to delayed diagnosis of various GI cancers).
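As a rough summary of the prevalence-based framework described above, the choice reduces to: pretest all asymptomatic patients when the local prevalence of asymptomatic infection is intermediate (0.5%–2%) and tests are available, using negative results to downgrade PPE; otherwise (low or high prevalence, or impractical testing logistics) forego routine pretesting and have all endoscopy personnel wear N95 or higher masks. The R sketch below merely restates that logic for illustration; the function name is an assumption and the thresholds are those quoted in the text, not a recommendation.

    # Illustrative restatement of the pretesting framework discussed above.
    # prevalence: local prevalence of asymptomatic SARS-CoV-2 infection (as a fraction).
    pretesting_strategy <- function(prevalence, tests_available) {
      if (tests_available && prevalence >= 0.005 && prevalence <= 0.02) {
        "pretest asymptomatic patients; a negative test may allow surgical masks"
      } else {
        "forego routine pretesting; endoscopy personnel wear N95 (or higher) masks"
      }
    }
    pretesting_strategy(prevalence = 0.01, tests_available = TRUE)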
In patients with COVID-19, several systematic reviews and meta-analyses have described the prevalence of GI symptoms, including diarrhea (8%–17%), nausea or vomiting (4%–20%), loss of appetite (2%–21%), abdominal pain (3%–20%), anorexia (8%–10%), abdominal distension (1%) and loss of taste (1%–3%). Most GI symptoms associated with COVID-19 are mild. Diarrhea caused by SARS-CoV-2 may be the initial symptom in patients with COVID-19. A small subset of patients with COVID-19 may develop isolated GI symptoms throughout the disease (2.9%–16%). Our understanding of the endoscopic findings in COVID-19 is limited. Several case series and retrospective and prospective cohort studies have helped us to understand the direct and indirect effects of COVID-19 on the GI tract. GI endoscopy for GI bleeding in patients with COVID-19 is reviewed in the article by Cappell and Friedel in this issue. Mechanistically, viruses in the GI tract, including coronaviruses, can contribute to disease by interacting with the mucous layer, epithelial cells, and potentially lamina propria immune cells. SARS-CoV-2 infection can disrupt the tight and adherent junctions of the endothelium and intestinal epithelium, which may lead to a leaky gut, local and systemic invasion of normal microbiota, and consequent immune activation. In one retrospective, single-center study of 95 laboratory-confirmed cases of SARS-CoV-2 infection from Zhuhai, China, 6 patients with GI symptoms underwent upper endoscopy and 2 underwent proctoscopy. Biopsies were taken from the esophagus, stomach, duodenum, and rectum for viral RNA detection. One patient with severe disease had GI bleeding localized to the esophagus, attributed to multiple round herpetic erosions and ulcers, each with a diameter of 4 to 6 mm. SARS-CoV-2 RNA was detected in the esophageal erosions and bleeding site, as well as in the stomach, duodenum, and rectum. In the other 5 patients (cases 2–6), no erosions, ulcers, or bleeding were noted. SARS-CoV-2 RNA could also be detected in the esophagus, stomach, duodenum, and rectum of another patient with severe COVID-19 infection (case 2). In contrast, the virus was only detected in the duodenum of the nonsevere case 3 and could not be detected in any GI specimens of the nonsevere cases 4 to 6. In a case report of a patient with COVID-19 who underwent endoscopy, biopsies revealed no damage to the epithelium of the esophagus, stomach, duodenum, and rectum, but infiltrates of occasional lymphocytes were observed in the esophageal squamous epithelium, and numerous infiltrating plasma cells and lymphocytes with interstitial edema were observed in the lamina propria of the stomach, duodenum, and rectum. In a retrospective study from Lombardy, Italy, 38 patients with confirmed SARS-CoV-2 underwent endoscopic evaluation (24 EGDs, 20 colonoscopies). Endoscopic lesions were observed in 18 of 24 EGDs (75%) and in 14 of 20 colonoscopies (70%). The main findings were esophagitis (20.8%), bulbar ulcer (20.8%), erosive gastritis (16.6%), neoplasm (8.3%), and Mallory-Weiss tear (4.1%). Colonoscopy revealed segmental colitis associated with diverticulosis (25%), colonic ischemia (20%), diffuse hemorrhagic colitis (5%), and colonic neoplasms (5%).
Finally, in a multicenter cohort of ∼2000 hospitalized patients with COVID-19 across a geographically diverse network of medical centers in North America, only 1.2% of patients (n = 24) underwent endoscopy despite a high prevalence of GI symptoms and substantial burden of critical or prolonged illness. Most endoscopic procedures were performed for either emergency cases (eg, ongoing GI bleeding or biliary obstruction) or for placement of enteral access tubes. Among those who underwent endoscopy, the indications and findings were judged more likely to reflect overall systemic illness or to be related to prolonged hospitalization rather than direct viral injury from COVID-19. The investigators did not observe inflammatory pathology and concluded that SARS-CoV-2 did not seem to behave as a highly invasive and injurious pathogen to GI mucosa.
In summary, the unprecedented COVID-19 pandemic led to significant disruptions in gastroenterology practice, requiring endoscopy centers to be adaptive, reactive, and innovative. With the emergence of new variants and the ever-present threat of new pandemics, lessons learned during these past few years will help maintain the safe practice of endoscopy and prepare for new and emerging pathogens. Although mechanistically SARS-CoV-2 may contribute to enteric disease, endoscopic findings in patients with COVID-19 are likely to reflect the underlying critical illness rather than the direct effect of the virus.
• Aerosolization during upper and lower endoscopy occurs along a continuum, and respirator masks, such as N95s, along with eye protection, gowns, and gloves are an important strategy to minimize risk of viral transmission
• Endoscopy centers should incorporate several strategies based on the Hierarchy of Controls Model to reduce the risk of viral transmission
• The role of preprocedure testing should be based on local prevalence, testing availability, PPE availability, and patient burden
• Although SARS-CoV-2 can be detected in stool, there have been no reports of infection via the fecal-oral route
• Endoscopic and histologic findings in patients with COVID-19 are more consistent with prolonged and severe systemic illness and suggest no direct viral or inflammatory pathogenic effects
The author has nothing to disclose.
|
Proteomics-driven discovery of LCAT as a novel biomarker for liver metastasis in colorectal cancer | b0711aba-c3ad-4ff1-9124-4a85e7283f0b | 11909834 | Biochemistry[mh] | According to the latest global cancer statistics, the incidence and mortality rates of colorectal cancer (CRC) have reached 10.0% and 9.4%, respectively, ranking third in incidence and second in mortality among all malignant tumors . Although surgery and comprehensive treatment have significantly improved the diagnosis and efficacy of CRC treatment, the overall survival (OS) rate remains relatively low in patients with liver metastasis (LM) . LM is the most common type of distant metastasis, with an estimated 15–25% of patients showing signs of LM at the time of initial diagnosis, and approximately 20–25% of patients developing LM after resection of primary CRC . Furthermore, 40–75% of patients experience recurrence after liver resection . Therefore, in clinical practice, identifying molecular markers that affect the occurrence of LM in CRC is valuable for diagnosis and treatment and is of great significance for both short- and long-term prognosis in patients with CRC. Recent studies have uncovered numerous molecular markers that influence LM in CRC. In 2021, Xu et al. found that the polycomb protein BMI-1 plays a significant role in CRC LM. Their study revealed that BMI-1 is upregulated in CRC with LM and is associated with stage T4 and depth of invasion. Further cellular and animal studies demonstrated that BMI-1 overexpression promotes CRC invasiveness and epithelial-mesenchymal transition (EMT), suggesting it as a potential molecular target for treating CRC LM (CRCLM) . In 2022, Xi Liu et al. identified a mechanism by which MT2A influences LM in CRC. The study showed that overexpression of MT2A enhances the phosphorylation of MST1, LAST2, and YAP1, which inhibits the Hippo signaling pathway and reduces LM in CRC . In a 2024 study, Zhang et al. demonstrated that SLC14A1 interacts with and stabilizes the TβRII protein, preventing its K48-linked ubiquitination and degradation by Smurf1, thus enhancing the TGF-β/Smad signaling pathway and increasing CRC cell invasiveness. Furthermore, TGF-β1 upregulates SLC14A1 mRNA expression, creating a positive feedback loop. Clinical data analysis revealed that SLC14A1 is upregulated in CRC patients with LM, confirming its potential as a predictive marker for LM in CRC . As tumor research has advanced, an increasing number of studies have highlighted the significant role of cellular metabolism in cancer development . Lecithin cholesterol acyltransferase (LCAT) is an enzyme responsible for producing most cholesterol esters in plasma and plays a crucial role in the reverse cholesterol transport process. LCAT activity is essential for the formation of mature high-density lipoprotein (HDL) and the remodeling of HDL particles . Traditionally, LCAT has been considered "anti-atherosclerotic"; however, recent studies suggest that LCAT may also play a unique role in cancer. Overexpression of LCAT has been linked to certain cancers, including breast and ovarian cancers, where it may alter lipid metabolism in cancer cells and promote tumor growth and invasion . Given the special role of LCAT in cancer, it has emerged as a potential therapeutic target. Inhibition of LCAT activity or its expression may suppress tumor growth. 
However, the mechanisms by which LCAT affects the intracellular and extracellular lipid microenvironment in CRC, thereby contributing to LM, remain unclear and require further investigation. In this study, we aimed to identify molecular markers influencing LM after CRC surgery. Using proteomic mass spectrometry (MS), we identified differentially expressed proteins between CRC patients who developed LM after surgery and those who did not. Bioinformatics analysis was employed to select hub proteins, and data from The Cancer Genome Atlas (TCGA) public tumor database were integrated to validate their expression and clinical significance. We collected clinical data and pathological tissue specimens from CRC patients meeting inclusion criteria at our center and established tissue microarray chips to analyze the clinical value of the hub proteins. Finally, a series of experiments was conducted using tissue samples and tumor cells to explore whether LCAT influences CRC cells, elucidating its mechanisms and identifying potential molecular markers affecting LM in CRC. This research provides valuable guidance for clinical practitioners, aiming to improve diagnosis, treatment plans, and ultimately the prognosis of patients with CRC.
Proteomics mass spectrometry analysis Essential solutions were prepared for protein extraction using 25 mM dithiothreitol (DTT), 100 mM iodoacetamide, and a phenol extraction reagent (sucrose). The concentration of extracted proteins was determined using the Bradford protein assay. SDS–polyacrylamide gel electrophoresis was then performed, followed by trypsin digestion, peptide desalting, and high-resolution mass spectrometry analysis using liquid chromatography-tandem mass spectrometry (LC–MS/MS). Detailed experimental methods can be found in Supplementary Material 1. Tissue microarray fabrication, staining, and scanning Each tissue specimen was cut into blocks measuring 5 × 15 × 15 mm and fixed in formalin. Tissue samples were dehydrated using an ASP300 automated tissue processor. The dehydrated samples were then embedded in paraffin blocks, which were sectioned into 4 µm thick tissue microarrays using an automated microtome. The prepared tissue microarrays were stained using immunohistochemical methods, and the stained arrays were scanned and quantified using a digital pathology slide scanner to obtain H-score values. Detailed methods for tissue microarray fabrication, staining, and quantitative scanning are provided in Supplementary Material 1. Enrichment analysis After identifying differentially expressed proteins, Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses were conducted using R (R packages AnnotationDbi, org.Hs.eg.db, and ClusterProfiler) to describe their functions. The GO/KEGG functional enrichment process involved using the species protein as the background list and the differential protein list as the candidate list. Enrichment significance of functional sets in the differential protein list was calculated using the hypergeometric distribution test to determine the P -value. Public data sources and analysis RNA-seq data and clinical information from 698 CRC patients were obtained from TCGA. The cohort was divided into high- and low-expression groups based on the median value for survival analysis. Differential expression between tumor and normal tissues was assessed through comparison of non-matched and matched sample groups. Additionally, the cohort was stratified into four groups based on tumor stage, and differential analysis was conducted to test for expression differences across these groups. Inclusion and exclusion criteria for the clinical cohort We retrospectively collected data from patients who underwent curative resection for CRC at our department between January 2011 and December 2019. The inclusion criteria were: (1) a confirmed pathological diagnosis of colorectal adenocarcinoma, (2) curative surgical resection, (3) no history of other malignant tumors, (4) at least three years of follow-up, and (5) complete clinical data. The exclusion criteria were: (1) synchronous distant metastasis at initial diagnosis, (2) LM within 3 months after surgery, (3) distant metastasis to other sites after surgery, and (4) loss to follow-up. Ultimately, 119 patients were enrolled and divided into two groups: (1) the LM group (60 patients), defined as those who developed CRC LM (CRLM) within 36 months after resection of the primary tumor, and (2) the non-LM group (59 patients), defined as those who did not develop distant metastasis (CRC non-LM, CRNLM) within 36 months after resection of the primary tumor . 
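The GO/KEGG enrichment step described above evaluates each functional set with a hypergeometric test, asking whether the overlap between the differential-protein list and the set is larger than expected by chance given the background list. The base-R sketch below shows that single calculation for one set; the counts used are invented purely for illustration (only the 383 differential proteins come from this study), and in practice the clusterProfiler workflow named above performs this test, with multiple-testing correction, internally.

    # Hypergeometric enrichment P-value for one functional set (illustrative counts).
    # N: background proteins; K: background proteins annotated to the set;
    # n: differential proteins (383 in this study); k: differential proteins in the set.
    N <- 8000; K <- 120; n <- 383; k <- 18
    p_enrich <- phyper(k - 1, K, N - K, n, lower.tail = FALSE)  # P(overlap >= k)
    p_enrich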
Patient information and grouping Clinical information included age, sex, preoperative tumor complications, T stage, N stage, tumor size, differentiation grade, vascular and neural invasion, postoperative adjuvant chemotherapy, number of lymph node metastases, and postoperative CEA and CA199 levels (measured three months after surgery). All data were obtained from clinical medical records, examination reports, and pathological materials. This study was approved by the Ethics Committee of Zhongshan Hospital, affiliated with Dalian University (Approval Number: KY2023-002–1). Follow-up Follow-up assessments were conducted every three months for the first two years, every six months for the next five years, and annually thereafter. The routine follow-up procedures included: (1) routine physical examinations and blood tests every three months for the first two years, every six months for five years, and annually thereafter; (2) chest radiography and abdominal CT scans every six months for the first two years, and annually thereafter; and (3) gastrointestinal endoscopies annually for the first two years. During follow-up, LM was confirmed using contrast-enhanced CT. Patients without metastasis were followed for at least three years after surgery. The follow-up period was defined as the time from the first day after CRC surgery to the appearance of LM or the end of follow-up. Quantitative real-time PCR Total RNA was extracted from CRC tissues and adjacent normal tissues using the TRIzol method, which involves tissue processing, Trizol lysis, chloroform separation, isopropanol precipitation, and ethanol washing. LCAT expression was detected using quantitative PCR (qPCR). The primer concentration was 10 μmol/L, and the reaction conditions were as follows: initial denaturation at 55 °C, denaturation at 95 °C, and annealing/extension at 60 °C for 40 cycles. Data analysis was performed using the 2^−ΔΔCt relative quantification method. The experiment was repeated three times, and a P-value < 0.05 was considered indicative of significant differences. For detailed experimental methods, see Supplementary Material 1. Western blotting Total protein was extracted from cancer and adjacent normal tissues. Tissue samples were minced, washed with phosphate-buffered saline (PBS), homogenized, and centrifuged in RIPA lysis buffer containing PMSF. The supernatant was collected and stored at −20 °C. For adherent cells, the same procedure was applied: the cells were washed with PBS, lysed in RIPA buffer containing PMSF on ice, and centrifuged to collect the supernatant for storage. The protein concentration was determined using the Bicinchoninic Acid (BCA) assay kit. Standards and samples were prepared, mixed with BCA working solution, and incubated at 37 °C. Protein concentration was calculated by measuring absorbance with a microplate reader. SDS-PAGE gels (resolving and stacking) were prepared, poured onto glass plates, and dried for later use. For the Western blotting experiment, 50 μg of protein samples were denatured at 100 °C, then subjected to SDS-PAGE. Proteins were transferred to a PVDF membrane, blocked, incubated with primary and secondary antibodies, and visualized using an ECL chemiluminescence substrate to detect protein expression levels. For detailed experimental methods, see Supplementary Material 1. Construction of LCAT stably expressing cells The lentiviral packaging process involved transfection and culture of 293T cells.
Prior to transfection, the cells were adjusted to an optimal density and switched to serum-free medium. The miRNA expression vectors pHelper1.0 and pHelper2.0 were then mixed with Lipofectamine 2000 to form transfection complexes, which were added to the cell culture medium. After transfection, the cells were cultured for an additional 48 h. The supernatant was collected and subjected to centrifugation, filtration, and ultracentrifugation to concentrate the virus. The virus was aliquoted and stored at −80 °C. Lentivirus titer determination was performed by infecting cells, selecting stably transfected cells with puromycin, and calculating the number of viable cells. Finally, the lentivirus was used to infect cancer cells. After infection, the medium was replaced, and puromycin was added for selection. LCAT expression was detected using qRT-PCR and Western blotting. For detailed experimental methods, see Supplementary Material 1. Transwell migration assay The basement membrane was hydrated for 30 min. A cell suspension was then prepared by serum-starving cells for 12–24 h, digesting with trypsin, centrifuging, washing with PBS, and resuspending in serum-free medium containing bovine serum albumin (BSA) to a concentration of 5 × 10^5 cells/ml. Next, 100 µL of the cell suspension was added to the upper chamber of the Transwell insert, and 600 µL of medium containing 10% FBS was added to the lower chamber. The cells were cultured for 12–48 h. After the culture period, the cells were fixed with methanol for 30 min and stained with crystal violet. Non-migrated cells were removed by wiping, and five random fields were selected under a microscope to count the migrated cells, which were used to assess migratory capacity. Nile red staining CRC cells with either overexpressed or downregulated LCAT were cultured at an optimal density. The cells were washed three times with PBS to remove the culture medium, then fixed with 4% paraformaldehyde for 15 min and washed again with PBS. The cells were incubated with diluted Nile Red staining solution at 37 °C for 15–30 min, followed by washing 2–3 times with PBS to remove unbound dye. The red or orange-yellow fluorescence of intracellular lipid droplets was observed and imaged using a fluorescence microscope (excitation wavelength: 543 nm; emission wavelength: 598 nm), and the experimental results were recorded. Statistical analysis Continuous variables that fit a normal distribution are expressed as mean ± standard deviation (mean ± SD), while those not fitting a normal distribution are expressed as median with interquartile range (Median, IQR). Categorical variables are expressed as frequencies and percentages. Differences between two groups were compared using the t-test or chi-square test, and differences among multiple groups were compared using analysis of variance (ANOVA). All tests were two-sided. Survival curves were plotted using the Kaplan–Meier method, and differences between survival curves were compared using the log-rank test. Odds ratios (OR) were calculated using logistic regression. Multivariate logistic regression analysis was performed on covariates that were significant in the univariate analysis. Statistical analyses were conducted using SPSS (version 27.0; IBM Corp., Armonk, NY, USA) and R (version 4.3.1; R Foundation for Statistical Computing, Vienna, Austria). Statistical significance was defined as P < 0.05.
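To make the analysis plan above concrete, the sketch below shows, on a synthetic toy dataset, how odds ratios with 95% confidence intervals are obtained from a logistic regression fit and how a Kaplan–Meier comparison with a log-rank test is set up in R. The data frame and variable names are assumptions for illustration only and do not reproduce the study's records.

    # Synthetic toy data standing in for the clinical cohort (all names are assumptions).
    library(survival)
    set.seed(1)
    cohort <- data.frame(
      lm_status   = rbinom(119, 1, 0.5),              # liver metastasis within 36 months (toy)
      lcat_hscore = runif(119, 0, 300),               # immunohistochemical H-score (toy)
      cea_post    = rlnorm(119, 1, 1),                # postoperative CEA level (toy)
      group       = sample(c("LCAT high", "LCAT low"), 119, replace = TRUE),
      time_months = runif(119, 1, 36),
      event       = rbinom(119, 1, 0.6)
    )

    # Logistic regression: odds ratios with 95% confidence intervals.
    fit <- glm(lm_status ~ lcat_hscore + cea_post, data = cohort, family = binomial)
    exp(cbind(OR = coef(fit), confint(fit)))

    # Kaplan-Meier curves by LCAT group and log-rank comparison.
    km <- survfit(Surv(time_months, event) ~ group, data = cohort)
    survdiff(Surv(time_months, event) ~ group, data = cohort)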
Differential protein analysis Proteomic analysis was conducted on pathological tissues from five CRC patients with LM (CRLM) and five without (CRNLM) within three years post-surgery. Using a P -value < 0.05 and a fold change of |log2FC|≥ 1, cluster analysis heat maps (Fig. A) and volcano plots (Fig. B) identified 383 differentially expressed proteins: 212 upregulated and 171 downregulated. Enrichment analysis GO and KEGG enrichment analyses were performed on the 383 differentially expressed proteins. The GO enrichment analysis covered three main categories: biological process (BP), cellular component (CC), and molecular function (MF). The results indicated that these proteins were involved in biological processes such as steroid metabolism, alcohol metabolism, and acute inflammatory responses. They were associated with cellular components, including lipoprotein particles, protein-lipid complexes, and HDL particles, and exhibited molecular functions such as lipoprotein particle binding and protein-lipid complex binding (Fig. A). KEGG enrichment analysis revealed that these proteins participated in metabolic pathways, including cholesterol metabolism, retinol metabolism, tyrosine metabolism, biosynthesis of steroid hormones, glutathione metabolism, arachidonic acid metabolism, and ferroptosis (Fig. B). PPI analysis and TCGA database analysis to identify hub proteins Protein–protein interaction (PPI) network analysis of the differentially expressed proteins was conducted using the STRING website. The Cytoscape software CytoHubba plugin was used to calculate the connectivity of the nodes within the differentially expressed proteins and to identify the top 10 hub proteins (Fig. ). The hub proteins, ranked from first to tenth, were LCAT, APOA1, SERPINA1, HPX, APOA2, KNG1, C3, AFM, ORM1, and HRG (Supplementary Fig. 1). The analysis was conducted using transcriptome sequencing data and associated clinical data from 698 patients in the TCGA public database. Survival analysis revealed that LCAT significantly affected the prognosis of patients with CRC. However, APOA1, SERPINA1, HPX, APOA2, KNG1, C3, and ORM1 did not affect CRC prognosis (AFM and HRG had too many missing values in the TCGA database to yield meaningful results) (Supplementary Fig. 2). Differential expression analysis showed that LCAT expression levels in CRC tissues were significantly higher than those in adjacent normal tissues ( P < 0.05) (Supplementary Fig. 3). The 5-year OS (HR = 1.64, 95% CI: 1.16–2.34; Log-rank P = 0.006), disease-specific survival (DSS) (HR = 1.73, 95% CI: 1.10–2.74; Log-rank P = 0.018), and progression-free interval (PFI) (HR = 1.49, 95% CI: 1.09–2.03; Log-rank P = 0.012) were all lower in the high LCAT expression group than in the low LCAT expression group. Analysis of expression differences among tumor pathological stages revealed that LCAT expression levels were significantly higher in more advanced stages ( P < 0.05) (Supplementary Fig. 4). The expression level of LCAT was higher in CRLM than in CRNLM Surgical pathological tissues from 119 patients with CRC were collected for immunohistochemical staining and subsequent scoring. The results of immunohistochemical staining and intergroup scoring are shown in Fig. . Patients with CRC who developed LM within three years post-surgery exhibited strong LCAT expression in their tumor tissues, while patients who did not develop LM within three years post-surgery showed weak LCAT expression. 
A statistically significant difference was observed between the two groups ( P < 0.05). Western blot analysis confirmed that LCAT expression levels in CRLM tumor lesions were significantly higher than those in CRNLM (CRC without LM) (Fig. ). Clinical data analysis We collected clinical data from 60 patients with CRC who developed LM within 3 years after surgery and 59 patients who did not. The clinical baseline data of the patients are presented in Table . There were no statistically significant differences in sex, age, postoperative chemotherapy, tumor size, or differentiation degree between the LM and non-LM groups ( P > 0.05). However, significant differences were observed in preoperative tumor complications, T staging, N staging, neurovascular invasion, number of lymph node metastases, postoperative CEA, postoperative CA199, and LCAT scores ( P < 0.05). The results of the univariate and multivariate logistic regression analyses are presented in Table . Preoperative tumor complications, N staging, neurovascular invasion, number of lymph node metastases, postoperative CEA, postoperative CA199, and LCAT scores were identified as statistically significant risk factors for LM after CRC surgery ( P < 0.05). Multivariate logistic regression analysis revealed that LCAT scores (OR: 10.221 [95% CI: 2.287–45.679]; P = 0.002) and postoperative CEA levels (OR: 1.296 [95% CI: 1.054–1.593]; P = 0.014) were independent risk factors for LM after CRC surgery ( P < 0.05). Supplementary Table 1 presents the intergroup differences between the high and low LCAT expression groups. Significant differences were observed between the groups in preoperative tumor complications, T stage, perineural and vascular invasion, LM, and CEA and CA199 levels ( P < 0.05). However, no significant differences were found in age, sex, postoperative chemotherapy, tumor size, degree of differentiation, N stage, or number of lymph node metastases ( P > 0.05). LCAT promotes tumor cell migration by affecting lipid droplet aggregation in CRC The LOVO cell line was used for LCAT overexpression and knockdown (Supplementary Fig. 5), and Transwell migration assays were performed. The results showed that LOVO cells overexpressing LCAT exhibited significantly enhanced migration, whereas LCAT knockdown in LOVO cells significantly inhibited migration (Fig. ). To investigate the mechanisms through which LCAT affects CRC cells, Nile Red staining experiments were conducted. The results showed that CRC cells overexpressing LCAT exhibited more pronounced lipid droplet aggregation, whereas LCAT knockdown significantly reduced lipid droplet aggregation (Fig. ).
Globally, the high incidence and mortality rates of CRC present significant challenges to the medical community. Despite advancements in surgical techniques and integrated therapeutic strategies that have notably improved CRC treatment, enhancing the OS rate of patients with liver metastases remains a complex issue. This study employed proteomic MS to identify differential proteins between patients with CRC who did and did not develop LM after surgery. Through comprehensive bioinformatics analysis, a key hub protein, LCAT, was selected. Our research further utilized data from the public tumor database TCGA to validate the correlation between LCAT expression levels and the clinical characteristics of patients with CRC, providing important molecular evidence for understanding the role of LCAT in CRC LM. Additionally, by collecting and analyzing clinical data and pathological tissues from patients with CRC at our center, we further confirmed the clinical significance of LCAT as a potential molecular marker. With the continuous advancement of multiomics techniques, significant progress has been made in understanding the molecular mechanisms underlying the LM of CRC. In particular, the role of tumor cell metabolism in CRC LM has gradually emerged. A 2013 study by Thomas et al. revealed significant lipid alterations in tumor regions of CRC LM using imaging MS (IMS) technology, underscoring the key role of lipids in this process . Furthermore, a 2022 study by Wang et al. identified inositol monophosphatase 2 (IMPA2) as a potential hub gene in the occurrence and LM of CRC. Its expression positively correlates with poor prognosis and advanced tumor staging. IMPA2 may influence CRC occurrence and LM by affecting tumor lipid metabolism and the EMT process, as well as regulating its expression through DNA methylation . The findings of this study further expand our understanding of this area. We observed significant differences in protein expression patterns between patients who did and did not develop LM after CRC surgery. Pathway enrichment analysis revealed that these differentially expressed proteins were enriched in various metabolic pathways, including cholesterol, retinol, tyrosine, steroid hormone biosynthesis, glutathione, arachidonic acid, and ferroptosis . Notably, LCAT, a key enzyme involved in cholesterol esterification and transport, occupied a central position among the significantly altered proteins. In a 2022 study by Zhang et al., LCAT activity in the serum of patients with liver cancer was significantly disrupted compared to that in normal individuals, and LCAT was strongly correlated with prognosis, immune cell infiltration, immune regulatory factors, sensitivity to anticancer drugs, and the proliferation marker KI67 . Moreover, studies by Liang Hong Guoqing Ouyang et al. confirmed the dysregulation of LCAT expression in hepatocellular carcinoma . Through bioinformatics analysis, we found that LCAT expression in CRC samples was significantly higher than in normal tissue samples, and that high LCAT expression was associated with lower OS, disease-free survival, progression-free survival, and advanced tumor staging. This suggests that, after the onset of CRC, changes in LCAT expression may mediate lipid metabolism within or outside tumor cells, thereby influencing tumor biological behavior. 
A 2020 study by Hyoung-Min Park and colleagues highlighted that LCAT is a biomarker for invasive breast cancer and is highly expressed in late-stage or highly metastatic tumors, further supporting the potential role of LCAT in tumor invasion and metastasis. In this study, we confirmed, through immunohistochemical staining, Western blot (WB) experiments, and clinical data analysis, that CRC tissues with LM post-surgery exhibited significantly higher LCAT expression levels compared to those without LM post-surgery. Univariate and multivariate regression analyses identified LCAT as an independent risk factor for LM after CRC surgery. Additionally, through cellular experiments and Nile Red staining, we found that LCAT might promote tumor cell migration by influencing lipid droplet aggregation in CRC cells, thereby affecting lipid metabolism, which ultimately contributes to postoperative LM. Although this study provides new insights into the role of LCAT in LM of CRC, it has several limitations. First, due to the relatively small sample size, there may be a certain degree of selection bias, which could affect the generalizability and applicability of the results. To address this limitation, we plan to expand the sample size in future studies to enhance the statistical power and reliability of our findings. Second, the specific mechanisms underlying the role of LCAT in CRC LM were not verified through animal experiments in this study. Therefore, follow-up studies are needed to design and implement a series of basic experiments to explore how LCAT affects LM in CRC. Despite these limitations, this study is the first to reveal that LCAT is a potential molecular marker of LM in CRC. By influencing lipid metabolism in CRC cells, LCAT may promote LM, offering a new perspective for clinical treatment. This not only provides clinicians with a potential therapeutic target but also brings new treatment strategies and hope for patients. We look forward to conducting prospective multicenter studies in the future to further validate the clinical value of LCAT as a biomarker and to provide higher-level evidence and practical guidance for the clinical diagnosis and treatment of patients with CRC LM.
Our study identified LCAT as a crucial molecular biomarker of LM in CRC. LCAT serves as an independent risk factor for postoperative LM in patients with CRC, potentially by influencing lipid metabolism in CRC cells, thereby facilitating the development of LM after surgery. This study presents a novel therapeutic target for cancer patients and offers predictive value for the occurrence of postoperative LM.
Supplementary Material 1. Supplementary Material 2. Supplementary Material 3. Supplementary Material 4.
|
Encapsulation of Flavours and Fragrances into Polymeric Capsules and Cyclodextrins Inclusion Complexes: An Update | f3337fc4-623e-4195-927b-ecd285e5046a | 7763935 | Pharmacology[mh] | Flavours and fragrances are a large class of compounds widely employed as additives in different technological fields, including food, cosmetics, textiles and others, mainly to ameliorate the olfactory and gustatory sensations of the product . They comprise both synthetic and naturally occurring molecules, such as essential oils (EO) and aroma compounds . Especially those of natural origin, which are mostly derived from plants, possess, in addition to sensory properties, also various biological activities (e.g., antibacterial, antiviral, antifungal, antiprotozoal, insect-repellent, anticancer, antidiabetic, anti-inflammatory and antioxidant) that raise the interest around this class of compounds . Besides the large potential of exploitation, the major drawbacks regarding their use are related to the volatility and chemical instability . Indeed, most of these compounds are sensitive to light, heat or oxygen; therefore, they can be deteriorated during the manufacturing process and reduce or lose their shelf-life activity during storage and consumer manipulation . To overcome these concerns, different encapsulation strategies have been applied, aiming to prevent the evaporation of volatile compounds and protect them from degradation . Through encapsulation, the compounds are protected by a shell of a different nature (e.g., polymeric, inorganic, lipid or mixed), which acts as a diffusion barrier, thereby enhancing their retention, controlling the release and prolonging the chemical stability . Encapsulation can be achieved using several techniques depending on the nature of the wall material and the fragrance itself, leading to the formation of micro/nano cargoes such as capsules, spheres or vesicles. Both the encapsulation of flavours and fragrances in cargoes of nanometric (nanoencapsulation) and of micrometric (microencapsulation) size have been widely investigated. However, microencapsulation has some advantages over nanoencapsulation, such as a higher payload, better control on the release and an easier processing and industrial scalability . Although several materials of a different nature have been proposed as shells for the encapsulation of fragrances and flavours , polymers and cyclodextrins (CDs) still remain the most employed in all technological fields . Particularly, polymers both of natural or synthetic origin have been reported to successfully encapsulate flavour and fragrances into single or multi-layered core-shell micro- or nanocapsules . These capsules resulted in being highly versatile for the encapsulation of volatile compounds, thanks to the large variety of polymers and methodologies available (e.g., coacervation and interfacial polymerisation), through which their chemical–physical properties can be tuned . Therefore, polymeric capsules can provide an easy handling and processing of this class of chemical compounds, guaranteeing, at the same time, a satisfactory protection from evaporation or degradation, good mechanical properties and the possibility of modulating or controlling the release at different conditions . Besides, molecular inclusion complexation with CDs has been also widely exploited . CDs represent a simple and relatively affordable material, resulting effectively in the encapsulation of aroma and volatile compounds . 
CDs are a family of cyclic oligosaccharides (α-CD, β-CD and γ-CD) composed of six, seven or eight glucosyl units, respectively, with a hydrophilic outer surface and a hollow hydrophobic cavity able to host lipophilic "guest" molecules of defined size and shape with a defined stoichiometry of interaction. They have been employed for the encapsulation of a large variety of volatiles, such as EOs, plant flavours and spices, with the aim of masking unpleasant smells and tastes, converting them into solid crystalline forms and improving their physical and/or chemical stability. The present review addresses the recent literature, mainly focusing on papers published between 2018 and 2020, related to the encapsulation of flavours and fragrances into polymeric capsules and inclusion complexes with CDs. Particular attention is devoted to the applications of these encapsulated systems in different technological fields, such as the textile, cosmetic, food and paper industries.
2.1. Polymeric Capsules Different methods have been reported in the literature for the encapsulation of flavours and fragrances in polymeric capsules. The choice of the most suitable technique depends on the type of core and shell materials, on the properties that the final micro- and nanosystems must possess in terms of size, shell thickness and permeability, and on the desired release rate of the active molecule. In addition, the final application of the capsules can also affect the selection of the most suitable encapsulation process, which can be tailored as a function of the intended use. Generally, these techniques can be divided into three major categories: chemical methods (e.g., in situ polymerisation, emulsion polymerisation and interfacial polymerisation), physical-chemical methods (e.g., emulsification and coacervation) and physical-mechanical methods (e.g., spray-drying, freeze-drying, electrohydrodynamic methods and extrusion). These methods have been extensively reviewed in recent years, highlighting their strengths and weaknesses, also in relation to the different applications in which they have been employed. An overview of the recent advances in the methods applied to the micro/nanoencapsulation of fragrances and flavours in polymeric capsules and to the formation of molecular inclusion complexes with CDs is presented here. While chemical methods are suitable only for capsules whose shells are made of synthetic polymers, the so-called physical-chemical and physical-mechanical methods can be employed for both natural and synthetic polymers. Chemical methods are nevertheless generally preferred for synthetic polymers since, in most cases, they are more effective in controlling the size and shape of the capsules and in assuring a high loading capacity and encapsulation efficiency. These methods include in situ polymerisation, emulsion polymerisation and interfacial polymerisation. Recently, a new approach has been proposed based on the free-radical crosslinking copolymerisation of a double oil-in-water-in-oil (O/W/O) emulsion to prepare synthetic polymeric capsules encapsulating fragrances. This strategy has the advantage of separating the polymerisation process, which occurs in the aqueous phase containing monomers, crosslinkers and an initiator, from the fragrance compartment. In this way, possible undesired reactions involving the fragrance during the polymerisation process are avoided. Coacervation has been a widely employed method for the micro- and nanoencapsulation of different compounds since the 1950s; it is based on the physicochemical process of phase separation, in which a polymeric dispersion forms a liquid polymer-rich phase, known as a coacervate, under specific conditions. Coacervation can be classified as simple or complex. In simple coacervation, the polymer is salted out by the action of electrolytes or desolvated by the addition of a water-miscible nonsolvent, while complex coacervation is essentially driven by the attractive forces between oppositely charged polymers. The encapsulation process can be performed in an aqueous phase for hydrophobic, water-insoluble materials, or in an organic phase or via a preliminary double-emulsification step for hydrophilic compounds.
Coacervation therefore allows the encapsulation of different kinds of functional ingredients (solid or liquid core materials), including flavours and fragrances, to be utilised in many industrial sectors, such as food, cosmetics or pharmaceuticals. The complex coacervation process has been largely exploited to obtain polymeric capsules containing fragrances, flavours and EOs in the core, with biopolymers such as proteins (e.g., gelatin and silk fibroin) and polysaccharides (gum arabic, gum tragacanth, pectin, chitosan, agar, alginate, carrageenan and sodium carboxymethyl cellulose) as shell materials. Recently, polyelectrolyte complexes of cationised casein, used as an alternative polycation, and sodium alginate were prepared via complex coacervation without crosslinking agents. These complexes were stable and suitable for the controlled release of vanillin fragrance. In another recent study, oregano EO was encapsulated through complex coacervation using gelatin and chia mucilage as an alternative to plant-derived gums. The obtained nanocapsules were compared to those prepared with the standard polyelectrolyte combination gelatin/gum arabic after a spray-drying process. High EO entrapment, both before and after spray-drying, was achieved using the gelatin/chia mucilage combination. Moreover, the particle size after drying was lower than that of the control formulations, suggesting the potential use of the gelatin/mucilage combination for the encapsulation of EOs in different applications. Phase separation of a polymer from a colloidal dispersion can also be achieved using a vapour phase as the antisolvent, the so-called vapour-induced phase separation (VIPS). This technique has been widely employed for the preparation of films, membranes and hydrogels, but it has recently been proposed for the preparation of microcapsules. A complex mix of fragrances has been encapsulated in cellulose acetate microcapsules via the VIPS technique. The obtained capsules had a core-shell architecture, a high encapsulation capacity and stability of up to one year at room temperature, showing no fragrance diffusion in the dry state in the absence of external stimuli. Among the physical-mechanical methods, the one currently most employed for the encapsulation of flavours and fragrances is still spray-drying. It has been reported that, for flavour encapsulation, around 80–90% of the encapsulated products are obtained by spray-drying, followed by spray-chilling (5–10%), melt extrusion (2–3%) and melt injection (∼2%). Spray-drying is one of the most common methods for several reasons, such as equipment availability and simplicity, the possibility of using a wide variety of encapsulating agents, large-scale production, good efficiency and reduced processing costs. On the other hand, a considerable loss of aroma compounds can occur during the spray-drying process, due either to chemical reactions among the flavour and fragrance constituents activated at the operating temperature or to the diffusion of volatiles through the shell and their consequent evaporation into the environment. Spray-drying has been extensively employed for the microencapsulation of EOs using several wall materials, especially polysaccharides (e.g., chitosan and carrageenan) or gums. Specifically, the ingredient to be encapsulated is added to the carrier (the core-to-carrier ratio can be optimised for each individual combination), and the dispersion is then fed into the spray-drying chamber through an atomiser (e.g., a spray nozzle).
Atomisation occurs thanks to the circulating hot air, which allows the evaporation of the aqueous medium. The dispersed carrier materials should be soluble in water and have a low viscosity at high concentrations to assure efficient drying. The factors influencing the spray-drying process, as well as the characteristics of the obtained EO-loaded capsules, have been investigated. In one study, the impact of the wall composition (whey protein isolate, maltodextrin and sodium alginate) was evaluated in terms of the formation and stability of cinnamon EO microcapsules produced by spray-drying. In another, the effect of using reduced pressure and an oxygen-free environment during the spray-drying process (vacuum spray-drying, VSD) was examined in comparison with conventional spray-drying (SD) for the encapsulation of orange EO, using maltodextrin and octenyl succinic anhydride-modified starch as the wall materials. The VSD technique provided microcapsules with a smaller size and a higher encapsulation efficiency than those obtained with the standard technique. Spray-chilling, also known as spray-cooling, spray-congealing or prilling, is a related technique utilised for the microencapsulation of flavour compounds, especially when lipids are employed as wall materials. Spray-chilling is similar to spray-drying, but a cooling chamber is required instead of a drying chamber. This technique is also easy to use and to scale up, entails a lower loss of flavours by diffusion and avoids organic solvents and high inlet air temperatures. Its disadvantages are poor control of the particle size and moderate yields. Electrohydrodynamic processes such as electrospinning and electro-spraying can also be used for the encapsulation of flavours and fragrances; they generally produce micro- or nanofibers from a polymeric dispersion extruded through a spinneret under a high-voltage potential, or particles formed at the nozzle through liquid atomisation by electric forces. The two techniques differ in the concentration of the polymeric dispersion: high polymer concentrations give rise to nanofibers by electrospinning, whereas low polymer concentrations yield fine droplets/particles by electro-spraying. Since these methodologies do not require heating, they are very promising for the encapsulation of heat-sensitive compounds such as flavours, fragrances and EOs. Different polymers have been evaluated for the formation of nanofibers encapsulating volatile compounds, such as cellulose derivatives, biodegradable polyesters, dendrimers or polysaccharides such as seed gums and mucilages. In recent years, new advances in electrospinning-based techniques have been introduced, namely coaxial electrospinning/spraying and emulsion electrospinning/spraying, enabling the production of core-shell fibres and particles. An example is the work of Dehcheshmeh and Fathi, in which an aqueous saffron extract was encapsulated in core-shell nanofibers via coaxial electrospinning. The shell was formed by zein, while the core was made of gum tragacanth in which the saffron extract was dispersed. The results showed that the produced core-shell nanofibers were thermostable, assuring the stability and satisfactory entrapment of the saffron extract compounds, which were slowly released in saliva, hot water and simulated gastric and intestinal media.
Core/shell nanofibers containing cinnamon oil were also successfully obtained by the emulsion electrospinning technique, using poly(vinyl alcohol) as the water phase. These nanofibers contained up to 20% w/w of cinnamon oil and showed a continuous release of the major volatile components (cinnamaldehyde, eugenol and caryophyllene) for up to 28 days. Recently, another electrospinning technique has been proposed, i.e., needleless electrospinning, which is more suitable for the production of large-scale batches since no needles are used, thereby avoiding clogging. Unlike the more common technique, in which the fibres form owing to the mechanical forces and geometric characteristics of the needle, it is based on the self-formation of the electrospun fibres on an open-surface electrode. Needleless electrospinning has been employed for the nanoencapsulation of cinnamic aldehyde in zein nanofibers and for the nanoencapsulation of thyme EO in chitosan/gelatin nanofibers. The obtained nanofibers showed bactericidal effects and, after mixing into sausage batter, did not alter the colour, texture and sensory characteristics of the final food product. Melt extrusion is another "traditional" technique employed in past decades for the encapsulation of flavours and fragrances. It consists of melting the polymer with a plasticiser and subsequently mixing in the compound to be encapsulated. The obtained melt is forced out of the extruder orifice under high pressure. Droplets originate from the action of surface tension and of gravitational or frictional forces, and they form solid particles when quickly dried. A variant is represented by the co-extrusion method, which enables the formation of core-shell particles. Specifically, the liquid active ingredient and the solubilised wall material are pumped, in two separate streams, through a concentric nozzle. Droplets are formed by applying a vibration to the laminar jet, giving particles after drying. These techniques require mild operating conditions and have been employed for carbohydrates, dextrins and starch-based polymers. In a recent work, different blends of a modified starch (i.e., octenyl succinate starch) and malto-polymers with different molecular weights were investigated to optimise the microencapsulation of orange oil through a twin-screw extrusion process. The study highlighted how the matrix composition, the amount of water in the mixture and the degree of starch gelatinisation affected the oil payload. Over the years, alternative novel methods have been investigated for the encapsulation of fragrances and flavours. In particular, supercritical CO2 (sCO2) technologies have been employed to formulate particles or capsules with a wide variety of polymeric materials. In these processes, supercritical CO2 can act as a solvent, solute or antisolvent, giving rise to different techniques (e.g., the Rapid Expansion of Supercritical Solutions, RESS, and the supercritical antisolvent, SAS, techniques). sCO2 methodologies are versatile and scalable, allow a formulation process in a completely anhydrous medium and provide noncontaminated products, high encapsulation efficiencies and customised particle properties. The characteristics of the particles/capsules can be tuned by employing supercritical CO2 at different operating conditions (e.g., temperature and pressure).
Recently, the particles from gas-saturated solutions (PGSS) technique, which uses sCO2 as the solvent at moderate pressure and temperature, was employed for the encapsulation of eucalyptol in polyethylene glycol/polycaprolactone microparticles and of Citrus aurantifolia EO in polyethylene glycol/lauric acid microparticles, demonstrating satisfactory entrapment efficiency and a controlled release. sCO2-based alternatives to the conventional techniques can also be employed for encapsulation, such as the supercritical fluid extraction of emulsions (SFEE). This technique is based on the removal of the organic-phase solvent by sCO2 on a time scale of fractions of a second, leading to the rapid precipitation of the compounds dissolved in it. In general, very small and highly homogeneous particles are obtained. Lima Reis et al. reported for the first time the encapsulation of an EO (i.e., Laurus nobilis EO) using the SFEE technique. A chemically modified food starch was used as the encapsulating agent. The encapsulation efficiency of the SFEE process was found to be favoured by an increase in the EO concentration, and the final dried particles proved effective in protecting this highly volatile oil. Complex coacervation, spray-drying, coaxial electrospinning and supercritical fluid technology are thus among the micro/nanoencapsulation processes that have been successfully employed for the encapsulation of fragrances and flavours. 2.2. Molecular Inclusion Complexes with CDs The formation of molecular inclusion complexes using CDs is a very common microencapsulation approach, widely investigated for different purposes. It is based on stoichiometric, predominantly hydrophobic interactions established, via a dynamic equilibrium, between the CD and the complexed substance, which is entrapped in the hydrophobic cavity of the CD. The formed host–guest complexes have proved effective in improving the stability and prolonging the release of large amounts of fragrances, flavours, EOs and volatiles. In addition to the continuous and extensive studies on the interactions, and therefore on the encapsulation capability, of CDs towards volatile compounds, a flourishing literature has recently appeared on the processing of molecular inclusion complexes by electrospinning to obtain micro- and nanofibers incorporating fragrances and volatile compounds. In recent years, one research group in particular has published several works reporting the incorporation of molecular inclusion complexes of volatile compounds into polymeric fibres, mats or webs by electrospinning. A polymer-free electrospinning approach was applied to CD inclusion complexes to enhance the water solubility, improve the high-temperature stability and control the release of carvacrol, thymol, camphor, menthol, limonene, citral, cineole and p-cymene, and eugenol. In other studies, the volatile/CD inclusion complexes were incorporated via electrospinning into a biopolymer matrix such as zein or pullulan, into semisynthetic polymers such as cellulose acetate or into synthetic polymers such as poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV). These polymers have been used for the formation of edible or biodegradable antimicrobial films, as well as porous membranes for packaging or biomedical applications.
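As a general reminder (not a result of the reviewed studies), the dynamic host–guest equilibrium underlying these 1:1 inclusion complexes is commonly written and quantified as follows; when an A_L-type phase-solubility diagram is available, the Higuchi–Connors treatment is typically used to estimate the apparent stability constant:

\[ \mathrm{CD} + \mathrm{G} \rightleftharpoons \mathrm{CD{\cdot}G}, \qquad K_{1:1} = \frac{[\mathrm{CD{\cdot}G}]}{[\mathrm{CD}]\,[\mathrm{G}]}, \qquad K_{1:1} = \frac{\mathrm{slope}}{S_0\,(1-\mathrm{slope})} \]

where G is the volatile guest, \(S_0\) its intrinsic solubility in the absence of CD and "slope" the slope of the linear phase-solubility plot. Higher \(K_{1:1}\) values correspond to a stronger retention of the guest within the cavity and, in general, to a slower release.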
Micro- and nanocapsules/spheres, as well as molecular inclusion complexes with CDs, have been largely employed as protective carriers for aroma compounds (fragrances, aromas and flavours) in different technological fields. The following paragraphs summarise the main experimental studies recently conducted on the design and application of micro- and nanocapsules/spheres in the textile, food, cosmetic and paper production fields. 3.1. Textile Applications Textiles represent one of the most investigated applications for micro- and nanospheres/capsules encapsulating fragrances and aromas. These encapsulated volatile compounds have been employed for several years in textile-finishing processes, for example in fabric conditioners, to impart freshness and odour control. Through encapsulation, fragrances are retained and released over a long time. Moreover, the sensation of the added encapsulated fragrances can be preserved even after several washing–drying cycles (up to 25), thereby improving the attractiveness of the product to consumers. Encapsulated perfumes and EOs have been added to scarves, ties, lingerie and other garments, as well as to home textiles such as sofa coverings, curtains and cushions for aromatherapy. Perfumes and aromas can be applied directly onto textiles; however, their poor affinity for fabric fibres and their volatility limit their permanence. Encapsulation thus prolongs the duration of the aroma sensation through the controlled release of the fragrance. For this purpose, several types of fabric can be processed with encapsulated fragrances and aromas, such as cotton, silk and synthetic fibres (polyamide or polyester). The micro- and nanocapsules/spheres can be added to textiles using different techniques, such as impregnation, spraying, coating or stamping. The encapsulation of fragrances and aromas is still largely achieved through traditional methods such as simple or complex coacervation, as well as inclusion complexation or interfacial polymerisation. However, other innovative encapsulation processes for fragrances and aromas have recently been explored for textile applications. Ye et al. proposed an electro-spraying method in aqueous media to prepare composite nanospheres made of silk fibroin and β-CD encapsulating rose oxide or D-limonene. The nanospheres had an aroma encapsulation efficiency higher than 90% and were deposited directly on silk fabric. The fragrances were released with zero-order kinetics, guaranteeing a low and constant release rate. Notably, more than 80% of the composite nanospheres were retained after 10 runs of washing with water, demonstrating their applicability in the textile field. The retention of fragrances and aromas, especially after washing or rubbing, depends on the penetration of the microcapsules and nanocapsules into the spacing of the textile during the finishing process. To address this, one study prepared, through a microemulsion approach, a series of micro-/nanocapsules with sizes matched to the pore spacing of cotton textiles, formed by citronella oil as the core material and chitosan as the wall material. These micro-/nanocapsules were applied to the textile by vacuum impregnation. The matching between the pore spacing of the cotton textiles and the size of the micro-/nanocapsules was assessed via the retention of aromatic compounds in the finished cotton textiles after several washing cycles (washing durability).
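Two simple quantities recur in this subsection and are worth making explicit; the definitions below are generic conventions rather than formulas taken from the cited studies. The zero-order release reported by Ye et al. means that the cumulative amount of fragrance released grows linearly with time, while washing durability is usually expressed as the percentage of aroma retained on the fabric after a given number of washing cycles:

\[ Q(t) = k_0\,t \qquad\qquad \mathrm{Retention}_n\,(\%) = \frac{m_{\text{aroma after } n \text{ washes}}}{m_{\text{aroma initially on the finished fabric}}} \times 100 \]

where \(k_0\) is the constant release rate. Read in this way, the retention figures quoted below compare the aroma remaining on the finished fabric with the amount initially present.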
In that study, the aromatic retention of cotton textiles finished with nanocapsules was much greater than that of the same textiles finished with microcapsules (28.84% vs. 1.55%) after 10 washing cycles. The authors demonstrated that nanocapsules can penetrate better into the pores of cotton textiles. To overcome the issues related to the poor fastness and durability of the capsules on textiles, several approaches have been employed in the past, using chemical binders or crosslinking agents. Recently, Ma et al. exploited electrostatic adsorption and in-situ immobilisation to retain nanocapsules loaded with lavender essence on cotton textiles. Firstly, the textile was positively charged through quaternary ammonium cationisation to promote the adsorption of nanocapsules with a negatively charged surface. In-situ immobilisation was then achieved via the diffusion and permeation of an alkali solution, leading to a chemical bond between the nanocapsules and the textile fibres at the position of adsorption. The encapsulated fragrance was released continuously for 120 days, and 91.19% of the essence still remained entrapped in the textile after five washing cycles. The authors proposed this method as a simple and "green" approach for the preparation of nanocomposite textile materials for different applications. In parallel, the encapsulation of fragrances and aromas has recently been pursued for the fabrication of "smart textiles" with additional functional properties, such as antibacterial activity, UV protection, moisturising and skin treatments, body temperature regulation and insect repellence, depending on the action of the encapsulated fragrance, aroma or EO. An example of encapsulation for UV protection in textiles is the work of Chen et al., which reported the one-step fabrication of cellulose/silica hybrid microcapsules via an emulsion solvent diffusion method. These microcapsules were loaded with lavender fragrance oil and embedded into a polysiloxane coating. Thanks to UV absorbers grafted onto the particle shell, this coating ensured a controlled release of the EO and an excellent UV-protective property, even after 30 repeated abrading/heating cycles. The authors proposed the use of this material for sports clothing, curtains and other outdoor textiles. Among the different classes of functional textiles, those with the greatest potential for exploitation are the cosmetic textiles, or cosmetotextiles. They are defined from the European Cosmetic Directive (76/768/EEC), Article 1, as "any textile product containing a substance or preparation that is released over time on different superficial parts of the human body, notably on human skin, and containing special functionalities such as cleansing, perfuming, changing appearance, protection, keeping in good condition or the correction of body odours". In these textiles, cosmetic ingredients are adsorbed onto or incorporated inside the cotton fibres of clothes and garments, to be transferred upon contact to the skin at a dose sufficient to impart some cosmetic benefit. The active ingredients, including fragrances and aromas, are generally encapsulated and released from the fabric to the skin upon different triggering events, such as changes in pH or temperature, sweating and rubbing. As for the other functional textiles, the encapsulation of the active ingredients allows a prolonged release, even after a few washing–drying cycles.
The washing durability is enhanced when the active ingredient is incorporated inside the fabric fibres rather than applied by coating or impregnation. The encapsulated active ingredient embedded in or adsorbed onto a cosmetotextile can exert any cosmetic action, including skincare, antiaging or odour control. Encapsulated aromas and fragrances have been incorporated in cosmetotextiles for perfuming or deodorising purposes, thereby controlling the odours resulting from daily activities and physical exercise. In a recent work, two strategies were reported for the release of β-citronellol from cotton functionalised with carbohydrate-binding module (CBM) proteins. The first strategy used odorant-binding proteins (OBPs) as a container for the fragrance, while the second exploited the high cargo capacity of liposomes for β-citronellol. The two carriers were bound to the cotton fabric via the CBM proteins. These two approaches made it possible to differentiate and control the release of β-citronellol after exposure to an acidic sweat solution. The release was faster with the OBP-based approach than with the liposomes immobilised on the functionalised cotton (31.9% vs. 5.9% of the initial amount after 90 min, respectively). The first strategy is therefore more suitable for applications in which a high amount of fragrance should be released in a short time, while the second is potentially useful for fabrics from which the fragrance should be released in a prolonged and controlled manner. Menthol, the most used coolant agent, which is able to penetrate through the stratum corneum, reaching the nerve terminations and providing a freshening sensation, was loaded into core-shell nanocapsules impregnated within a nonwoven fabric. The nanocapsules assured a rapid penetration of menthol into the deeper skin layers, preferentially through hair follicles and trans-epidermal absorption routes. Similarly, citronella oil was encapsulated in acacia gum microcapsules, which were dripped onto a nonwoven fabric. Microencapsulation by spray-drying reduced the volatility, giving a prolonged release of up to 16 weeks, and decreased the irritation potential with respect to nonencapsulated citronella oil, as evaluated by the nonanimal hen's egg test-chorioallantoic membrane (HET-CAM) assay. Numerous recent studies have thus reported the microencapsulation and nanoencapsulation of fragrances and flavours for textile applications. 3.2. Food Applications Another field on which research into the micro- and nanoencapsulation of fragrances and flavours has focused is food. As for other active ingredients, the encapsulation of fragrances and flavours has been exploited both in food processing and in the design of active food packaging. In the food industry, encapsulated flavours and fragrances have been widely used to improve taste and/or odour, to adjust the nutritional value and to prolong the shelf-life of food. As a result, food quality has improved, with positive implications in terms of consumer satisfaction and food consumption. Fragrances and flavours are volatile compounds and are prone to evaporation during several food-processing operations and during storage of the final product. Moreover, they can undergo chemical instability due to oxidation in the presence of air and light, moisture or high temperature, leading to chemical degradation and possible interactions with other food additives.
In this regard, these compounds can be stabilised by encapsulation or complex formation. Besides overcoming these concerns, encapsulation and/or complexation also improve the manageability of these volatile food additives, guaranteeing stability and simpler, standardised dosing. A classic example of the encapsulation of flavours in food technology is coffee aroma. Coffee aroma compounds are a mixture of pyridines, pyrazines, ketones, furans, etc., contained in the oil extracted from roasted coffee. These compounds are used as flavouring agents to enrich the aroma, especially in instant coffee formulations. Roasted coffee oil contains several unsaturated fatty acids that are sensitive to oxidative degradation in the presence of air. Microencapsulation has therefore been proposed as a strategy to preserve the freshly brewed coffee aroma in instant coffee products for a prolonged time after the first opening of the packaging. In addition, microencapsulation can be employed to control the release of these coffee aroma compounds over time. Specifically, roasted coffee oil was encapsulated in a modified food starch derived from waxy maize, and the resulting microcapsules were added to the formulations of soluble coffee and instant cappuccino products with the aim of modulating the release of volatile organic compounds (VOC). The addition of the microparticles improved the quality of the products in terms of aroma intensity, and the authors demonstrated how the composition of the product can affect the aroma release profile. Among all fragrances and aromas, EOs, obtained from a large botanical variety of plants, are the substances most frequently encapsulated in food. They are used to provide a pleasant smell to food or to cover its original odour, making the olfactory sensation a marker of product identity. Because they are volatile liquids, their direct incorporation into food is not straightforward. The food industry therefore generally employs encapsulated or complexed EOs, since these technological approaches both stabilise the EO components and increase their manageability. Complexation with β-CDs and encapsulation by simple or complex coacervation are still the approaches most investigated in recent scientific studies. Different EOs have been employed for their antioxidant and antimicrobial effects, exploited for fruit preservation. Syringa EO was microencapsulated through complexation with β-CDs and used as an antifungal agent against Botrytis cinerea and Alternaria alternata to improve the quality attributes and storage behaviour of peaches. Similarly, microcapsules of Zingiber officinale EO were prepared using chitosan and carboxymethyl cellulose as shell materials to investigate their effect on the postharvest quality and shelf-life of jujube fruits, in terms of morphological characteristics and parameters such as soluble solid content, titratable acidity, the red index and decay index, and sensory quality. The EO extracted from Eucalyptus leaves and incorporated into carboxymethyl cellulose (CMC) was employed to control the fungal growth causing soft rot on strawberries, representing a valid alternative to synthetic fungicides for this preharvest treatment. Active packaging represents an attractive option to preserve the quality and prolong the shelf life of food products.
The term refers to packaging materials that are not inert and do not merely perform the mechanical function of enclosing food, but "actively" interact with the atmosphere inside the packaging or directly with the food product. In most cases, active packaging is effective in preventing chemical–physical or microbiological degradation while maintaining the organoleptic and nutritional properties of the product. Studies on active packaging have increased over the years, and several EOs have been incorporated to prepare materials with antioxidant and antimicrobial properties. In this field, besides the traditional encapsulation approaches such as β-CD complexation and simple or complex coacervation, nanofibers and microfibers of different compositions have been explored. Cinnamon EO, as an antimicrobial agent against spoilage bacteria of edible fungi, was encapsulated in polyvinyl alcohol/β-CD; nanofibers were then formed by electrospinning and chemical crosslinking to finally obtain a film. The film was applied on the inner surface of a box containing mushrooms. The packaging based on the nanofibrous film inhibited Gram-positive and Gram-negative bacteria and prolonged the shelf life of the mushrooms, especially with regard to quality parameters such as hardness and colour. Cinnamon EO was also encapsulated in CD nanosponges (CD-NS) as an antimicrobial agent against foodborne pathogens, with potential use in food packaging. CD-NS containing cinnamon EO displayed an effective antibacterial action towards the tested bacteria. Notably, encapsulation enhanced the antibacterial activity of cinnamon EO with respect to the nonencapsulated oil, despite the slower release profile. According to the authors, this represents the first study demonstrating the potential use of CD nanosponges to encapsulate and control the release of EOs in aqueous media. A biocomposite for active food packaging was prepared using chitosan, β-CD citrate (β-CDcit) and an oxidised nanocellulose (ONC) biopolymer. The obtained film was then impregnated with clove EO, which was retained possibly through the formation of inclusion complexes between the components. In comparison with a chitosan film alone, a higher activity towards Gram-negative than Gram-positive bacteria and towards fungi than yeast was observed. In another work, a saffron extract was encapsulated by electrospinning and electro-spraying techniques in zein matrices, yielding different microstructures, namely particles or fibres. These microstructures protected the encapsulated bioactive compounds of the saffron extract at different pH values, storage temperatures and UV-light exposures, making these materials potentially suitable for food packaging and for healthy food formulations. Numerous recent studies have thus reported the microencapsulation and nanoencapsulation of fragrances and flavours for food applications. 3.3. Cosmetic Applications The European Union (EU) Cosmetics Regulation defines a cosmetic product as "any substance or mixture used for external parts of the human body (epidermis, hair system, nails, lips and external genital organs), teeth and mucous membranes of the oral cavity for cleaning them, perfuming them, changing their appearance and/or correcting body odors and/or protecting them or keeping them in good condition".
In recent years, the beauty and personal care industry has become a multibillion-dollar international business, with significant growth in emerging markets such as Brazil, China, India, Indonesia and Argentina. In general, there is an increasing interest in natural cosmetic formulations, which generates demand for new products reformulated using botanical and bioactive ingredients, including fragrances and aromas, to contribute to health, beauty and wellness. Another key to success in such a competitive and demanding sector is the use of emerging technologies, such as microencapsulation, able to bring innovation, functional properties and thus additional value to a cosmetic product. In particular, microencapsulation technologies have been proposed to increase stability, to protect against degradation and also to direct and control the release of active ingredients. Fragrance ingredients are active ingredients commonly used in cosmetic products intended for application to skin and hair with the purpose of releasing pleasant odours. In some cases, even products labelled as "unscented" may contain fragrances to mask the unpleasant smell of other ingredients without giving a perceptible scent. The application of microencapsulation technology to the delivery of flavours and fragrances is a topic of considerable interest, given the need to improve the efficacy of a wide range of cosmetic (perfumes) and personal care (hand and body wash, toothpaste, etc.) products. Fragrances are small volatile scented substances, and their volatility is fundamental for the sensory response, although it causes an often-undesired loss during storage that limits their effective use as additives in various products. Other substances are often used to replace natural fragrances because of the poor chemical and physical stability of the latter. Among these replacements are, for example, the synthetic nitro- and polycyclic musks used in perfumes, deodorants and detergents, which are toxic and nonbiodegradable and accumulate in the environment, in aquatic organisms and also in human milk. Since natural fragrances represent a preferable alternative from a toxicological point of view, microencapsulation is an effective strategy to overcome the issues related to their delivery. Microencapsulation can improve the shelf life and the delivery of highly volatile fragrances, with a gradual release of the encapsulated functional ingredient. Furthermore, the encapsulation technique has a strong effect on different odour properties and consumer perceptions, such as wet odour impact, tenacity and long-lasting odour during use, which are fundamental concerns for a cosmetic product. On the other hand, the formulation of effective nano- or microcapsules needs to take into account several issues, such as the amphiphilicity of volatile compounds and the need, and difficulty, of obtaining monodisperse microcapsules with precisely controllable shell thicknesses and shell materials. New preparation techniques have been tested to obtain microcapsules with precisely tunable sizes, highly efficient encapsulation and appropriate shell properties, such as crosslinking density, polarity and thickness, to achieve enhanced retention of fragrances.
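The link between these shell properties and fragrance retention can be rationalised, to a first approximation, by treating the capsule shell as a thin membrane through which the volatile permeates by Fickian diffusion; this is a simplified textbook picture rather than a model used in the cited studies:

\[ J = \frac{D\,K\,(c_{\mathrm{in}} - c_{\mathrm{out}})}{\ell} \]

where J is the steady-state flux of the fragrance through the shell, D its diffusion coefficient in the shell polymer, K its shell/core partition coefficient, \(c_{\mathrm{in}}\) and \(c_{\mathrm{out}}\) the fragrance concentrations on the two sides of the shell and \(\ell\) the shell thickness. Retention therefore improves with thicker shells and with lower D and K, which is what a higher crosslinking density or a shell polymer whose polarity is poorly matched to the fragrance provides.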
Another strategy is the use of chemically functionalised biodegradable polymeric carriers, able to provide enhanced properties over conventional carrier materials, with the advantage of being nonreactive in contact with the human body and of being metabolised and removed from the body via normal metabolic pathways. The most commonly used shell materials in cosmetics are polysaccharides (gums, starches, celluloses, CDs and chitosan), proteins (gelatin, casein and soy proteins), lipids (waxes, paraffin and oils) and synthetic polymers (acrylic polymers, polyvinyl alcohol and poly(vinylpyrrolidone)). Inorganic materials (silicates, clays and polyphosphates) can also be used. Different examples can be found in the literature on the development of systems intended for the encapsulation of fragrances for cosmetic applications. Sansukcharearnpon et al. encapsulated six fragrances (camphor, citronellal, eucalyptol, limonene, menthol and 4-tert-butylcyclohexyl acetate) using the solvent displacement method and different polymer blends of ethyl cellulose, hydroxypropyl methylcellulose and poly(vinyl alcohol) as polymeric carriers. The process gave a 40% fragrance loading capacity with an 80% encapsulation efficiency at a fragrance:polymer weight ratio of 1:1. A more recent example is the encapsulation of Kaffir lime oil, an EO from Kaffir lime leaves. It is known to possess important bioactivities, such as antioxidant, antileukemic, antitussive, antihemorrhagic, anti-oxidative-stress and antibacterial properties, which make it a fragrance used in the food, perfumery and cosmetic industries. Nanoencapsulation was achieved through a coacervation process, yielding nanocapsules with an uneven surface morphology, a mean size of 457.87 nm and an encapsulation efficiency of 79.07%. Novel biocompatible nanocapsules (mean diameter 100 nm) encapsulating a lily fragrance (LF-NPs), based on soya lecithin and 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-(polyethylene glycol)-2000 (DSPE-PEG (2000)) as the polymeric shell and PLGA as the core material, were formulated through the self-assembly technique, a simple and low-cost method. The encapsulation of the lily fragrance was about 21.9%, and a sustained release was obtained. Another example is the encapsulation of rose fragrance, widely applied in the textile and cosmetics industries and characterised by the presence of many kinds of volatile compounds in its composition. Polybutylcyanoacrylate (PBCA) nanocapsules obtained via anionic polymerisation were successfully used to encapsulate this fragrance (encapsulation efficiency of 65.83%), providing sustained release properties inversely proportional to the nanocapsule size. The same technique has been used for the encapsulation of tuberose fragrance in chitosan nanoparticles characterised by promising controlled-release and antibacterial properties. Apple aroma microcapsules were prepared by a complex coacervation–emulsion polymerisation technique using sodium alginate and tetradecylallydimethylammonium bromide as shell materials. The obtained microcapsules had a core-shell structure and a sphere-like shape (diameter from 20 to 50 μm). After optimisation of the formulation, the microcapsules showed thermal stability up to 110 °C and a 10.8% aroma release after 100 h. The aroma release increased markedly once the microcapsules were broken by pressure, suggesting a potential application in cosmetic products.
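The loading and efficiency figures quoted in this section can be read against a commonly used pair of definitions; note that individual papers may define these quantities slightly differently, so the following worked reading is only indicative:

\[ \mathrm{EE}\,(\%) = \frac{m_{\text{fragrance entrapped}}}{m_{\text{fragrance added}}} \times 100, \qquad \mathrm{LC}\,(\%) = \frac{m_{\text{fragrance entrapped}}}{m_{\text{total capsule or feed mass}}} \times 100 \]

For instance, for the ethyl cellulose/hydroxypropyl methylcellulose/poly(vinyl alcohol) system above, a 1:1 fragrance:polymer weight ratio with an 80% encapsulation efficiency corresponds to 0.8 g of entrapped fragrance per 1 g of polymer; referred to the 2 g of total feed, this is 0.8/2 = 40%, consistent with the reported 40% loading capacity when the loading is expressed relative to the total feed mass.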
Microcapsules containing camellia oil were prepared using the heterocoagulation approach between chitosan and oleic acid. For the preparation, oleic acid was dissolved in camellia oil and chitosan in the continuous aqueous phase. The obtained core-shell microcapsules were tested as a dressing material to be applied on hair. Their mean diameters ranged from 1.5 μm to 4.5 μm, and they adhered to the surface of human hair, being stable both before and after drying. A microparticulate system based on the zein and keratin proteins was developed for the release of fragrances on hair. Linalool and menthol were used as model fragrances. The zein/keratin microparticles were prepared using two approaches: (i) zein nanoparticles were first formed and keratin was then deposited onto the surface by electrostatic interactions, and (ii) zein was coprecipitated with keratin for microparticle formation. Microparticles were applied onto the hair, forming a film from which fragrances are released, thereby improving the hydration degree and mechanical properties of hair. EOs and volatile compounds can also be encapsulated in CDs in order to improve their water solubility; to avoid oxygen-, light- or heat-induced degradation and loss during processing and storage; and to stabilise them against unwanted changes. Moreover, the use of CD–flavour inclusion complexes allows the use of very small amounts of flavours. 1-Phenylethanol (1-PE) and 2-phenylethanol (2-PE) are important aromatic alcohols with rose-like fragrances that are major constituents of the scent of rose-like flowers. The applications of the two isomers have been limited because of their low aqueous solubility, high volatility and thermal instability. For these reasons, CDs have been utilised for the formation of 1:1 stoichiometric inclusion complexes with α-CD, β-CD and HP-β-CD. The results showed that 1-PE and 2-PE can form inclusion complexes with β-CD in the solid state with greatly enhanced stability, indicating that β-CD is a suitable excipient not only for increasing the stability but also for achieving a controlled release of 1-PE and 2-PE. Thus, β-CD complexation technology might be a promising approach for expanding the applications of 1-PE and 2-PE.
3.4. Paper Applications
Another application of aroma and fragrance encapsulation is the design of aromatic or scented paper. Aromatic paper is intended to provide a pleasant surrounding atmosphere on the basis of aromatherapy principles. In this regard, research has been focused on the development of wallpaper with the aim of providing comfortable sensations and enhancing psychological and physical well-being. Scented papers are, generally, wrapping or writing papers in which perfumes or fragrances are added for voluptuary purposes or marketing appeal. These papers can be prepared by adding the nano/microspheres containing fragrances or aromas directly into the pulp during the processing operations; alternatively, the encapsulated materials can be adsorbed onto the paper surface in a further production step. Moreover, the scented encapsulated compounds can be applied on paper after dispersing them into a coating varnish or ink. Lavender oil microcapsules were prepared with an ABA-type triblock copolymer (polyethylene oxide-polypropylene glycol-polyethylene oxide, PEO-b-PPG-b-PEO) and adsorbed onto the paper surface. The distribution of the microcapsules on the paper surface was homogeneous without degradation.
The colour and gloss properties of the paper were also maintained in compliance with the standards. In another work, lavender EO was encapsulated by coacervation using gelatin/gum arabic as the shell material. The obtained microcapsules were dispersed into a UV-curable varnish at a selected microcapsule-to-varnish ratio. The varnish was characterised in terms of the control and protection of the major volatile components of the encapsulated lavender EO. Notably, the presence of the encapsulated materials did not interfere with the standard screen-printing process generally employed to produce a fragrant gift-wrapping paper. Recently, encapsulated fragrances with an antibacterial effect were applied on paper. Specifically, vanillin was encapsulated in chitosan/poly(lactic-co-glycolic acid) (PLGA) nanocapsules to prepare an aromatic wallpaper with an additional antibacterial action. Thanks to the presence of chitosan, the nanospheres showed an antibacterial effect against Gram-positive and Gram-negative bacteria, and adhesion to the wallpaper was also enhanced. In another work, the encapsulated EO itself had an antibacterial effect. Citronella EO was encapsulated in microcapsules obtained either by complex coacervation, using gelatin/carboxymethyl cellulose or gelatin/gum arabic mixtures as coating materials, or by the in-situ polymerisation of melamine–formaldehyde with a polyacrylic acid modifier. These microcapsules were employed for the preparation of functional coatings intended for paper or cardboard secondary packaging. Both microencapsulation methods provided single-core microcapsules, but some differences were highlighted. Microcapsules from coacervation were more permeable and allowed a steady release of the EO, while those from in-situ polymerisation were impermeable, showing a high retention of the EO, which was released only after mechanical pressure. The released vapour efficiently inhibited the growth of the tested microorganisms, making this the first reported pressure-activated coating for antimicrobial paper. summarises the recent studies reporting the microencapsulation and nanoencapsulation of fragrances and flavours for paper applications.
Textiles represent one of the most investigated applications for micro- and nanospheres/capsules encapsulating fragrances and aromas. These encapsulated volatile compounds have been employed for several years in textile-finishing processes, such as fabric conditioners to impart freshness and odour control . Through encapsulation, fragrances are retained and released for a long time . Moreover, the sensation of the added encapsulated fragrances can be preserved also after several washing-drying cycles (up to 25); therefore, the attractiveness of the product to the consumers is improved . Encapsulated perfumes and EOs have been added in scarves, ties, lingerie and other garments, as well for home textiles, such as sofa coverings, curtains and cushions for aromatherapy . Perfumes and aromas can be directly applied on textiles; however, their scarce affinity to fabric fibres and their chemical volatility limit their permanence. Thus, encapsulation promotes a prolonged duration of aroma sensations due to the controlled release of the fragrance. For this purpose, several types of fabrics can be processed with encapsulated fragrances and aromas, such as cotton, silk and synthetic fibres (polyamide or polyester). These micro- and nanocapsules/spheres can be added to textiles using different techniques, such as impregnation, spraying, coating or stamping . The encapsulation of fragrances and aromas is still achieved through traditional methods such as simple or complex coacervation, as well as the inclusion encapsulation method or interfacial polymerisation. However, other “innovative” encapsulation processes for fragrances and aromas have been recently explored in textile applications. Ye et al. proposed an electro-spraying method using aqueous media to prepare composite nanospheres made up of silk fibroin and β-CD encapsulating rose oxide or D-limonene . The nanospheres have an aroma encapsulation higher than 90% and were deposited directly on silk fabric. The fragrances were released with zero-order kinetics, guaranteeing a low rate and constant release profile. Noticeably, the composite nanospheres were retained at a higher percentage (more than 80%) after 10 runs of washing with water, demonstrating its applicability in the textile field . The retention of fragrances and aromas, especially after washing or rubbing, depends on the penetration of microcapsules and nanocapsules into the spacing of textiles during the finishing process. To address this, in a work, a series of micro-/nanocapsules, with a size suitable for the pore spacing of cotton textiles and formed by citronella oil as the core material and chitosan as the wall material, was prepared through a microemulsion approach. These micro-/nanocapsules were applied on the textile through vacuum impregnation. The matching between the spacing of the pore sizes of cotton textiles and the sizes of micro-/nanocapsules was assessed via the retention of aromatic compounds in the finished cotton textiles after several washing cycles (washing durability). Indeed, the aromatic retention of cotton textiles finished by nanocapsules was much greater than the same textiles finished with microcapsules (28.84% vs. 1.55%) after 10 cycles of washing. The authors demonstrated that nanocapsules can penetrate better into the pores of the cotton textiles . To overcome the issue related to the poor combination fastness and duration in the textiles, several approaches were employed in the past, using chemical binders or crosslinking agents. Recently, Ma et al. 
exploited electrostatic adsorption and immobilisation to retain nanocapsules loaded with lavender essence on cotton textiles. Firstly, the textile was positively charged through quaternary ammonium cationisation to promote the adsorption of nanocapsules with a negatively charged surface. The in-situ immobilisation was achieved via the diffusion and permeation of an alkali solution, leading to a chemical bond between nanocapsules and the textile fibres at the position of absorption. The encapsulated fragrance was released continuously for 120 days, and 91.19% of the essence still remained entrapped in the textile after five washing cycles. The authors proposed this method as a simple and “green” approach for the preparation of nanocomposite textile materials for different applications. On the other side, the encapsulation of fragrances and aromas was pursued recently for the fabrication of “smart textiles” with additional functional properties , such as antibacterial, UV protection, moisturising and skin treatments, body temperature regulation and insect repellence, depending on the action of the encapsulated fragrances, aromas or EOs . An example of encapsulation for UV protection in textiles is from the work of Chen et al., in which the one-step fabrication of cellulose/silica hybrid microcapsules via an emulsion solvent diffusion method was reported . These microcapsules were loaded with lavender fragrance oil and embedded into a polysiloxane coating. This coating ensured a controlled release of the EO and an excellent UV protective property, even after 30 repeated abrading/heating cycles, thanks to the grafting onto the particle shell of UV absorbers. The authors proposed the use of this material for sports clothing, curtains and other outdoor textiles . Among the different classes of functional textiles, those with the most potential exploitation are the cosmetic textiles or cosmetotextiles. They are defined from the European Cosmetic Directive (76/768/EEC) Article l as “any textile product containing a substance or preparation that is released over time on different superficial parts of the human body, notably on human skin, and containing special functionalities such as cleansing, perfuming, changing appearance, protection, keeping in good condition or the correction of body odours” . In these textiles, cosmetic ingredients are adsorbed or incorporated inside the cotton fibres of clothes and garments to be transferred after contact to the skin at a dose enough to impart some cosmetic benefits . The active ingredients, including fragrances and aromas, are generally encapsulated and released from the fabric to the skin upon the action of different triggering events, such as changes in the pH or temperature, sweating and rubbing . As for the other functional textiles, the encapsulation of the active ingredients allows for a prolonged release, even after a few washing–drying cycles . The washing durability is enhanced when the active ingredient is incorporated inside the fabric fibres with respect to the application by coating or impregnation. The encapsulated active ingredient embedded or adsorbed onto a cosmetotextile can exert any cosmetic action, including skincare, antiaging or odour control. Encapsulated aromas and fragrances have been incorporated in cosmetotextiles for perfuming or deodorising purposes, thereby controlling odours resultant from daily activities and physical exercise. 
In a recent work, two strategies were reported for the release of β-citronellol from cotton functionalised with Carbohydrate-Binding Module (CBM) proteins. The first strategy was based on the odorant-binding proteins (OBPs) as a container for the fragrance, while the second one exploited the high cargo capacity for β-citronellol of liposomes. These two carriers were bound to the cotton fabric via CBM proteins. These two approaches were able to differentiate and control the release of β-citronellol after exposure with an acid sweat solution. Indeed, the release was faster for the OBP-based approach with respect to the immobilised liposomes on the functionalised cotton (31.9% vs. 5.9% of the initial amount after 90 min, respectively). Therefore, the first strategy result is more suitable for applications in which a high amount of fragrances should be released in a shorter time, while the second strategy is potentially employed for fabrics from which the fragrance should desirably be released in a prolonged and controlled manner . The most used coolant agent, menthol, which is able to penetrate through the stratum corneum, reaching the nerve termination and providing a freshening sensation, was loaded in a core-shell nanocapsule impregnated within nonwoven fabric. The nanocapsules assured a rapid penetration of menthol inside the deeper skin layers, preferentially through hair follicles and trans-epidermal absorption routes . Similarly, citronella oil was encapsulated in acacia gum microcapsules, which were dripped onto a nonwoven fabric. Microencapsulation by spray-drying reduced the volatility, with a prolonged release up to 16 weeks, and decreased the irritation potential of nonencapsulated citronella oil, as evaluated by the nonanimal hen’s egg test-chorioallantoic membrane (HET-CAM) assay . and summarise the recent studies reporting the microencapsulation and nanoencapsulation of fragrances and flavours for textile applications, respectively.
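Several of the textile studies above describe performance in terms of release kinetics (e.g., the zero-order profile reported for the silk fibroin/β-CD nanospheres) and of capsule retention after repeated washing. The sketch below is a generic numerical illustration of those two descriptors; the rate constants and wash figures are hypothetical except where noted, and the per-wash survival calculation simply assumes the same fraction of capsules survives each cycle.

```python
import math

def zero_order_release(k: float, t_hours: float) -> float:
    """Cumulative % released when the release rate is constant (zero-order): R(t) = k*t, capped at 100%."""
    return min(100.0, k * t_hours)

def first_order_release(k: float, t_hours: float) -> float:
    """Cumulative % released for first-order kinetics: R(t) = 100*(1 - exp(-k*t))."""
    return 100.0 * (1.0 - math.exp(-k * t_hours))

# Hypothetical rate constants chosen so both profiles reach ~63% at 100 h
for t in (10, 50, 100, 200):
    print(f"t = {t:4d} h | zero-order: {zero_order_release(0.63, t):5.1f}% "
          f"| first-order: {first_order_release(0.01, t):5.1f}%")

def per_wash_survival(cumulative_retention_pct: float, n_washes: int) -> float:
    """Average fraction of capsules surviving each wash, from the cumulative retention after n washes."""
    return (cumulative_retention_pct / 100.0) ** (1.0 / n_washes)

# Using the '>80% retained after 10 washes' figure quoted for the electro-sprayed nanospheres:
print(f"Per-wash survival for 80% retention after 10 washes: {per_wash_survival(80.0, 10):.3f}")  # ~0.978
```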
Another application in which the micro- and nanoencapsulation of fragrances and flavours research has been focused on is related to food . As for the other active ingredients, the encapsulation of fragrances and flavours has been exploited in food processing and for the design of active food packaging . In the food industry, encapsulated flavours and fragrances have been widely used to ameliorate taste and/or odour, to adjust the nutritional value and to prolong the shelf-life of food . As such, food quality has improved, with positive implication in terms of consumer satisfaction and food consumption . For instance, fragrances and flavours are volatile compounds and are prone to evaporation during several food-processing operations or storage of the final product. Moreover, they can undergo chemical instability due to oxidation in the presence of air and light, moisture or high temperature, leading to chemical degradation and possible interactions with other food additives . In this regard, these compounds can be stabilised by encapsulation or complex formations. In addition to overcoming these concerns, encapsulation and/or complex formations improve also the manageability of these volatile food additives, guaranteeing stability and a simpler and standardised dosing. A classic example of the encapsulation of flavours in food technology is coffee aroma. Coffee aroma compounds are a mixture of pyridines, pyrazines, ketones, furans, etc. contained in the oil extracted from roasted coffee. These compounds are considered as flavouring agents to enrich the aroma, especially in instant coffee formulations. Roasted coffee oil is composed of several unsaturated fatty acids sensitive to oxidative degradation in the presence of air. Therefore, microencapsulation has been proposed as a strategy to preserve the freshly brewed coffee aroma in instant coffee products for a prolonged time after the first opening of the packaging. In addition, microencapsulation can be employed to control the release of these coffee aroma compounds over time. Specifically, roasted coffee oil was encapsulated in a modified food starch derived from waxy maize, and the resultant microcapsules were added to the formula of soluble coffee and instant cappuccino products with the aim of modulating the release of volatile organic compounds (VOC). The addition of microparticles improved the quality of the products in terms of aroma intensity, and the authors demonstrated how the composition of the product can affect the aroma release profile . Among all fragrances and aromas, EOs obtained from a large botanical variety of plants are the most encapsulated substances in food . They are used to provide a pleasant smell to food or to cover the original odour, configuring the olfactory sensation as a product identity marker. Being volatile liquids, their direct incorporation in food is not straightforward. Therefore, the food industry generally employed encapsulated or complex EOs, since these technological approaches both stabilised the components of EOs and increased their manageability. Complexation with β-CDs and encapsulation by simple or complex coacervation are still the most investigated in recent scientific studies. Different EOs have been employed for their antioxidant and antimicrobial effects, exploited for fruit preservation. 
Syringa EO was microencapsulated by the formation of complexes with β-CDs and used as an antifungal agent against Botrytis cinerea and Alternaria alternata to improve the quality attributes and storage behaviours of peaches . Similarly, microcapsules of Zingiber officinale EO were prepared using chitosan and carboxymethyl cellulose as shell materials to investigate the effects on the postharvest quality and prolonged the shelf-life of jujube fruits in terms of morphologic characteristics and some parameters as soluble solid contents, titratable acidity, the Red index and decay index and sensory quality . The EO extracted from the leaves of Eucalyptus and incorporated into carboxymethyl cellulose (CMC) was employed to control fungal growth causing soft rot on strawberries, configured as a valid alternative to synthetic fungicides for this preharvest treatment . Active packaging represents a fashionable option to preserve the quality and prolong the shelf life of food products. It refers to packaging materials, which are not inert, and does not exert only a mechanical function of enclosing food, but they “actively” interact with the atmosphere inside the packaging or directly with food products . In most cases, active packaging results in being effective in preventing chemical–physical or microbiological degradation by maintaining, at the same time, the organoleptic and nutritional properties of the product . Studies about active packaging have increased over the years, and several EOs have been incorporated to prepare materials with antioxidant and antimicrobial properties . In this field, besides the traditional encapsulating approaches such as β-CD complexation and simple or complex coacervations, nanofibers or microfibers of different compositions have been explored . Cinnamon EO as an antimicrobial agent for spoilage bacteria of edible fungi was encapsulated in polyvinyl alcohol/β-CD. Then, nanofibers were formed by electrospinning and chemical crosslinking to finally obtain a film. The film was applied on the inner surface of the box containing mushrooms. The packaging based on the nanofibrous film inhibited Gram-positive and Gram-negative bacteria and prolonged the shelf life of mushrooms, especially regarding quality parameters such as hardness and colour . Cinnamon EO was also encapsulated in CD nanosponges (CD-NS) as an antimicrobial agent for antimicrobial activity against foodborne pathogens, potentially employed in food packaging. CD-NS containing cinnamon EO displayed an effective antibacterial effect toward the tested bacteria. Notably, encapsulation enhanced the antibacterial activity of cinnamon EO with the respect to the nonencapsulated one, despite the slower release profile. According to the authors of the work, it represents the first study demonstrating the potential use of CD nanosponges to encapsulate and control the release of EOs in aqueous media . A biocomposite for active food packaging was prepared using chitosan, β-CD citrate (β-CDcit) and an oxidised nanocellulose (ONC) biopolymer. The obtained film was then impregnated with clove EO, which was retained possibly by the formation of inclusion complexes between the components. A higher activity toward Gram-negative than Gram-positive bacteria and toward fungi than yeast was observed in comparison to chitosan film alone . In another work, a saffron extract was encapsulated by electrospinning and electro-spraying techniques in zein matrices, yielding different microstructures as particles or fibres. 
This microstructure protected the encapsulated bioactive compounds from the saffron extract at different pH values, storage temperatures and UV light exposure, configuring these materials as potentially employed for food packaging and food healthy formulations . summarises the recent studies reporting the microencapsulation and nanoencapsulation of fragrances and flavours for food applications.
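Where 1:1 host–guest inclusion complexes are used, as with the β-CD complexes discussed in the cosmetic and food sections, the amount of cyclodextrin required scales with the molar masses of host and guest. A minimal sketch of that arithmetic is given below; the molar masses are rounded approximations, limonene is used purely as an example guest, and the batch size is hypothetical.

```python
# Approximate molar masses (g/mol), rounded, for illustration only
MW_BETA_CD = 1135.0    # beta-cyclodextrin, C42H70O35
MW_LIMONENE = 136.2    # D-limonene, C10H16 (example guest)

def host_mass_for_1_to_1(guest_mass_g: float, mw_guest: float, mw_host: float) -> float:
    """Mass of host (g) needed for an equimolar (1:1) inclusion complex with the given guest mass."""
    moles_guest = guest_mass_g / mw_guest
    return moles_guest * mw_host

guest = 1.0  # g of fragrance to complex (hypothetical batch)
host = host_mass_for_1_to_1(guest, MW_LIMONENE, MW_BETA_CD)
theoretical_loading = 100.0 * guest / (guest + host)

print(f"beta-CD required for 1 g of guest: {host:.1f} g")                            # ~8.3 g
print(f"Theoretical fragrance content of the complex: {theoretical_loading:.1f}%")   # ~10.7%
```

The low theoretical fragrance content of such complexes is consistent with the observation above that CD–flavour inclusion complexes allow very small amounts of flavour to be used, dispersed in a large excess of carrier.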
Flavours and fragrances are compounds of great importance, widely employed in different products to improve quality and consumer satisfaction. Encapsulation protects them from evaporation and chemical degradation, thereby controlling their release and allowing simpler handling during processing. This strategy has enabled the use of flavours and fragrances in different technological applications, including the textile, food, cosmetic and paper industries. Although research is still ongoing in this field, encapsulation in core-shell polymeric nanoparticles, as well as the formation of molecular inclusion complexes between volatile compounds and CDs, are the most employed techniques in the experimental studies published in recent years. Both techniques have proved effective in encapsulating flavours, aroma compounds and EOs in a stable form suitable for different applications. Specifically, remarkable advances have been achieved for the encapsulation of these compounds or their molecular inclusion complexes in micro- or nanofibers/particles via electrodynamic processes. Among all technological fields in which core-shell-encapsulated flavours and fragrances find relevant use, the textile and food packaging industries are the most investigated, although other applications, such as paper production or coating, can also benefit from the potential development of these micro- or nanosystems.
Pain neuroscience education and motor imagery‐based exercise protocol for patients with fibromyalgia: A randomized controlled trial | af14549d-1160-4d1e-b8d4-ce74564b2216 | 11391019 | Patient Education as Topic[mh] | INTRODUCTION As a result of the studies initiated by the International Association for the Study of Pain (IASP), according to the IASP council, the definition of pain is “an unpleasant sensory and emotional experience associated with actual and/or potential tissue damage” (Raja et al., ). Fibromyalgia (FM) is a rheumatological disease characterized by widespread pain and tenderness. The presence of “chronic widespread pain” prevails without any tissue damage in the musculoskeletal system. Typical symptoms of FM are spontaneous pain in muscles and joints (allodynia), hyperalgesia, extreme thermal sensitivity, and extreme sensitivity to external stimuli such as chemicals, smells, sounds, and light. It also includes fatigue, sleep disturbance, morning stiffness, anxiety–depression, and cognitive disorders (Brum et al., ). Pain is a highly individual, multidimensional, and complex process; whether or not it was formed through a comprehensive evaluation, taking into account all our experiences, thoughts, feelings, and beliefs. It has been proven that the “brain” definitely decides for its emergence and that it has a neurophysiological basis. When the balance of the loads on the nervous system is disrupted, alarm bells begin to ring in the system and the brain decides that the system is in danger and creates pain (Butler & Moseley, ). There are biological, psychological, and sociological burdens on our body, and all of these stresses are effective in the formation of pain. Accordingly, this study argues that pain should be addressed within the framework of the biopsychosocial model, as in current pain approaches. Motor imagery (MI) is the mental execution of a movement without actually performing any movement and without stretching the muscles (Sengul et al., ). MI can be used to improve motor performance and learning motor tasks, inducing activation of various cortical areas, influencing the central nervous system, and causing changes in the brain (Grande‐Alonso et al., ). A neurocognitive rehabilitation approach with MI also improves pain recognition and perception in FM patients. It is effective in reducing pain and improving related symptoms (Paolucci et al., ). Pain neuroscience education (PNE), as a relatively new and promising approach, is an educational content that aims to reconceptualize pain by explaining the neurobiology and neurophysiology of pain related to pain experiences to people with chronic pain, rather than focusing only on tissue pathology (Puentedura & Flynn, ). PNE is a cognitive‐based intervention implemented to increase participants' knowledge about pain and change their attitudes and beliefs regarding pain (Willaert et al., ). It uses neurophysiological information to teach patients that pain can be overprotective and completely real, even in the absence of tissue damage (Ceballos‐Laita et al., ). Depending on the time of administration, PNE can be seen as a protection that allows taking precautions in acute pain situations and as a treatment/rehabilitation training in chronic pain situations (Louw et al., ). It has also taken on a health education role, aiming to provide up‐to‐date information on neuroscientific advances in the field of pain (Galan‐Martin et al., ). 
The main aim of this study was to determine whether applying both PNE and a motor imagery-based exercise protocol (MIEP) would primarily reduce pain in patients with FM. Both therapies have shown promise in FM patients; however, there are no studies evaluating their effectiveness in combination.
METHODS
2.1 Study design
This was a single-center, prospective, assessor-blinded, randomized controlled trial. Ethics approval was obtained from the Clinical Research Ethics Committee of Uskudar University (approval number: E-99102440-/2022-11). The study was registered at ClinicalTrials.gov (NCT05890326). Data collection was performed between November 2022 and May 2023. The study followed Moher et al.'s detailed randomized clinical trial guideline, the Consolidated Standards of Reporting Trials (CONSORT) (Moher et al., ), and the CONSORT flow chart in Figure presents the research design. All participants were given detailed information about the study procedure and signed consent forms. This study, which aimed to investigate the effectiveness of pain neuroscience education combined with MI exercises on chronic back pain, was designed as a four-group study comprising three experimental groups and a control group: (1) Motor Imagery-based Exercise Protocol (MIEP); (2) Pain Neuroscience Education (PNE); (3) MIEP + PNE; (4) Control Group (CG).
2.2 Participants, inclusion, and exclusion criteria
Baseline and follow-up examinations were performed by an experienced, blinded physical therapist and a neurologist, who referred patients who met the study criteria to the study physical therapist. The incoming patients were diagnosed with FM by a physical therapist and a neurologist who are experts in the field, according to the protocol described by Wolfe et al.: "The Fibromyalgia 2016 criteria require the following: (1) Widespread Pain Index (WPI) score ≥7 and Symptom Severity Score (SSS) score ≥5, or a WPI score 4–6 and SSS score ≥9; (2) the presence of widespread pain as defined above; and (3) symptoms of at least 3 months in duration (5)." Inclusion criteria were: (1) experiencing widespread chronic back pain for more than 12 months; (2) pain in at least 12 of 18 tender points under a pressure of 5 kg/cm²; (3) age 18–60 years; (4) not using pharmacological treatment; (5) not participating in any pain program; (6) not having participated in any physical exercise program in the last 2 years. Exclusion criteria were: (1) pregnancy; (2) currently continuing a physical exercise program; (3) the presence of a psychiatric disorder under psychological treatment; (4) the presence of major neurological and/or mental diseases such as Alzheimer's disease, dementia, and epilepsy; (5) the presence of physical and mental disabilities.
2.3 Procedure
Our study was a randomized controlled clinical study with four groups: one CG and three experimental groups. The experimental groups received the MIEP and PNE interventions either alone (group 2: MIEP, n = 12; group 3: PNE, n = 12) or combined (group 1: MIEP + PNE, n = 14). A fourth group served as the control for the experimental groups (group 4: CG, n = 12). MIEP was performed twice a week (24 sessions in total) and PNE once every 2 weeks (six sessions in total), with the entire sample (n = 50) being followed for 12 weeks.
2.4 Interventions: Motor imagery-based exercise protocol
The MIEP was based on Paolucci et al.'s previous study. Patients in the groups including MIEP received sessions of a maximum of 60 min, in groups of three to four people, twice a week for 12 weeks.
The MIEP intervention used in the study was blended and designed to best suit the patient group by examining previous studies that used an MI protocol. While selecting the exercises in the program, care was taken to ensure that the cervical–thoracic spine worked in every plane and axis, that the cervical–thoracic muscles were used, and that nerve mobilization was included. Before starting the study, video recordings of the designated exercises were made and a summary brochure about MI was prepared for the participants. In the first session, the effects of MI were explained theoretically, and at the beginning of each session, training on diaphragmatic breathing was given, as breathing exercises were performed before MI. At the end of each session, after MI, bodily awareness was stimulated by combining breathing and relaxation exercises, participants' opinions were collected, and their questions were answered. Relaxing meditation music was played in the background during MI and breathing exercises.
2.5 Interventions: Pain neuroscience education
PNE was applied face-to-face by a PNE-certified researcher (Selin Kircali), following Saracoglu et al. The training sessions were given in groups of five to six people, lasted a maximum of 60 min, and were held once every 2 weeks, for a total of six sessions over 12 weeks. The PNE intervention, as in similar studies in the literature, was mainly based on the book "Explain Pain" written by David Butler and Lorimer Moseley for patients suffering from chronic pain; in addition, up-to-date information from recently published articles on pain was shared. Brochures summarizing the educational content were prepared before the start of the training sessions. In each session, participants' opinions and thoughts were taken into account, interactive participation was encouraged by allowing them to give examples on the subject, and their questions were answered. Training sessions were held primarily face-to-face and only online if individuals were unable to attend in person for any reason. The sessions were organized and delivered by the physiotherapist who was trained in pain neurophysiology and pain management, holds a master's degree in neuroscience, and also conducted the research. Although neurophysiological terms were used during the explanations, they were simplified so that participants could understand them, and metaphorical examples were used to make them memorable.
2.6 Control group
Participants in the CG did not receive any intervention and were followed for 12 weeks. Their final evaluation was carried out 12 weeks after the initial evaluation.
2.7 Randomization and blinding
Individuals who met the criteria, had similar pain intensities, and agreed to volunteer for participation were given a volunteer code in the order of application. Without the patients' knowledge, papers numbered from 1 to 50 were allocated to the four groups by drawing lots (simple randomization); participants whose volunteer codes matched the numbers drawn for a group were included in that group. Preliminary and final evaluations were made by the two physiotherapists (Selin Kircali and Öznur Özge Özcan) who managed the study, performed the interventions, and participated blindly in the recruitment of participants.
The physical therapist (Öznur Özge Özcan) remained blind to the group allocation of the patients she referred as suitable for the study. The participants were also asked not to disclose any information about their treatment during the follow-up assessments.
2.8 Outcome measurements
2.8.1 Visual Pain Scale
Using a ruler, the score is determined by measuring the distance (mm) on the 10 cm line between the "no pain" anchor and the patient's mark, providing a score range of 0–10. A higher score indicates greater pain intensity. The Turkish validity study of the scale, the original version of which was developed by Freyd et al. in 2001, was conducted by Yaray et al. in 2011. In that study, the Cronbach's alpha coefficient was 0.965, supporting the validity of the Visual Analog Scale (VAS) in Turkey (Yaray et al., ). Mease et al. reported that, in clinical research on pain interventions in FM patients, minimal clinically important differences (MCIDs) may correspond to decreases in scores of 32.3% and 34.2% from baseline.
2.8.2 The Pain Beliefs Questionnaire (PBQ)
Beliefs about the causes and consequences of pain are collected in a total of 12 items; eight of them (items 1, 2, 3, 5, 7, 8, 10, and 11) reflect organic pain beliefs, and four of them (items 4, 6, 9, and 12) reflect psychological pain beliefs. The evaluation is carried out with a six-point Likert-type scoring, and the respondent marks one of the "always–almost always–often–sometimes–rarely–never" options for each item according to their own beliefs. The Turkish validity and reliability study of this scale, which was first published by Edwards et al. in 1992, was conducted by Sertel et al. in 2006. In that study, the Cronbach's alpha value for internal consistency was found to be 0.71 for organic beliefs and 0.73 for psychological beliefs, supporting its validity in Turkish (Sertel Berk, ). There is no reported reference MCID for the Pain Beliefs Questionnaire (PBQ) in patients with FM.
2.8.3 The Pain Catastrophizing Scale-9
The Pain Catastrophizing Scale-9 (PCS-9) is a 13-item self-report measure designed to assess catastrophic thinking related to pain among adults with or without chronic pain. Patients are asked to rate the degree to which they have any of the thoughts described in the questionnaire using a five-point Likert scale ranging from 0 (never) to 4 (always). Scores range from 0 to 52, with higher scores indicating a higher level of catastrophizing. The Turkish validity and reliability study of the scale, which was first developed by Sullivan et al. in 1995, was conducted by Uğurlu et al. in 2017. In that study, the Cronbach reliability coefficients for the helplessness, rumination, and magnification subscales and the total score were found to be 0.909, 0.856, 0.906, and 0.955, respectively, and validity and reliability were supported (Süren et al., ). The MCID for the PCS has been found to range from 3.2 to 4.5 in patients (Darnall et al., ).
2.8.4 The Tampa Scale for Kinesiophobia
The Tampa Scale for Kinesiophobia (TSK) is a 17-item questionnaire that quantifies fear of movement. Individual item scores range from 1 to 4, with the negatively worded items (4, 8, 12, 16) reverse scored (4–1). The 17-item TSK total score ranges from 17 to 68, where the lowest score of 17 indicates no or negligible kinesiophobia and higher scores indicate an increasing degree of kinesiophobia. This scale, which was developed but not published by Miller et al. in 1991, was published by Vlaeyen et al. in 1995.
The reliability study in Turkey was published by Tunca et al. in 2011. Test–retest reliability was assessed, and the intraclass correlation coefficient (ICC) was 0.806, indicating excellent reliability (Tunca et al., ). The MCID for the TSK was reported to be at least 4.5 points for patients with chronic musculoskeletal pain (Saracoglu et al., ).
2.8.5 Hospital Anxiety and Depression Scale
Of the 14 items, seven evaluate anxiety (odd-numbered items) and seven evaluate depression (even-numbered items); subscale scores between 0 and 7 are considered normal, scores between 8 and 10 borderline, and scores of 11 and above abnormal. The Turkish validity and reliability study of the scale, which was first developed by Zigmond and Snaith in 1983, was conducted by Aydemir et al. in 1997, and the Cronbach's alpha coefficient was found to be 0.8525 for the anxiety subscale and 0.7784 for the depression subscale. Thus, the Turkish form was accepted as valid and reliable (Aydemir et al., ). The MCID for the Hospital Anxiety and Depression Scale (HADS) was previously reported as 5.7 (Longo et al., ).
2.8.6 Cognitive Emotion Regulation Questionnaire
The scale consists of 36 items measuring nine subscales: self-blame, acceptance, rumination (focusing on negative thoughts), positive refocusing, refocusing on planning, positive reappraisal, putting into perspective, catastrophizing, and blaming others. Evaluation is made with a five-point Likert-type scoring, and the respondent ticks one of the options "almost never–rarely–sometimes–often–almost always" according to his/her own thoughts and feelings. The scale was first developed by Garnefski et al. in 2001, and its Turkish validity studies were conducted by Tuna and Bozo in 2012. In that analysis, the Cronbach's alpha value was 0.90, showing that the Cognitive Emotion Regulation Questionnaire (CERQ) is a valid and reliable measurement tool (Tuna & Bozo, ). There is no reported reference MCID for the CERQ in patients with FM.
2.8.7 Rosenberg Self-Esteem Scale
In this 10-item scale, participants choose one of the options "very true–true–false–very false" according to their thoughts. In scoring, the response options for the first five items are scored "4–3–2–1" and for the last five items "1–2–3–4." Scores between 30 and 40 indicate a good level of self-esteem, scores between 26 and 29 a moderate level, and scores of 25 or less low self-esteem. The Turkish validity and reliability of the scale, developed by Rosenberg in 1963, was demonstrated by Çuhadaroğlu in 1986 (Tonga & Halisdemir, ). There is no reported reference MCID for the Rosenberg Self-Esteem Scale (RSS) in patients with FM.
2.8.8 The Body Awareness Questionnaire
The Body Awareness Questionnaire (BAQ) is an 18-item scale. Items are scored on a 1–7 scale, with the total score calculated as the sum of the items; the items marked with asterisks are reverse scored. The Turkish validity and reliability study of the questionnaire, which was developed by Shields and colleagues in 1989, was conducted by Karaca in 2017. With a Cronbach's alpha value of 0.91, the Turkish version of the BAQ was accepted as valid and reliable (Karaca, ). There is no reported reference MCID for the BAQ in patients with FM.
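As an illustration of how some of the scores described above are computed, the sketch below implements the TSK total (with items 4, 8, 12, and 16 reverse scored), the HADS subscale cut-offs, and a VAS percent-change check against the MCID range quoted for FM pain trials. The item responses are hypothetical, and the functions are only a sketch of the scoring rules summarized in this section, not an official implementation of the questionnaires.

```python
from typing import List

REVERSED_TSK_ITEMS = {4, 8, 12, 16}  # negatively worded items, scored 4..1 instead of 1..4

def tsk_total(responses: List[int]) -> int:
    """Total TSK score (17-68) from 17 item responses, each 1-4; higher = more kinesiophobia."""
    assert len(responses) == 17 and all(1 <= r <= 4 for r in responses)
    total = 0
    for item_no, r in enumerate(responses, start=1):
        total += (5 - r) if item_no in REVERSED_TSK_ITEMS else r
    return total

def hads_category(subscale_score: int) -> str:
    """HADS anxiety/depression subscale interpretation: 0-7 normal, 8-10 borderline, >=11 abnormal."""
    if subscale_score <= 7:
        return "normal"
    return "borderline" if subscale_score <= 10 else "abnormal"

def vas_change_exceeds_mcid(baseline: float, follow_up: float, mcid_pct: float = 32.3) -> bool:
    """True if the % decrease in VAS from baseline reaches the quoted MCID threshold (~32.3-34.2%)."""
    decrease_pct = 100.0 * (baseline - follow_up) / baseline
    return decrease_pct >= mcid_pct

# Hypothetical data for one participant
tsk_items = [3, 2, 4, 1, 3, 3, 2, 2, 4, 3, 3, 1, 2, 3, 4, 2, 3]
print("TSK total:", tsk_total(tsk_items))
print("HADS-A category for a score of 9:", hads_category(9))
print("VAS 7.0 -> 4.5 exceeds MCID:", vas_change_exceeds_mcid(7.0, 4.5))
```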
2.9 Sample size
The sample size was calculated with the G*Power 3.1.9.4 program, taking into account the significance level and the effect size of the established hypothesis. Based on the correlation coefficient between pain intensity and depression (r = 0.562, p < .05) obtained in Özer et al.'s study (2018), a large effect size of 0.74 was obtained (Özer et al., ). To detect a significant difference with α = 0.05 (5% type I error) and 1 − β = 0.95 (95% power), the required sample size was calculated as 44 participants, with a minimum of 11 in each of the four groups (Özer et al., ).
2.10 Statistical analyses
Statistical analyses were performed using IBM SPSS Statistics for Windows, Version 25.0 (Statistical Package for the Social Sciences, IBM Corp.). Descriptive statistics are presented as n and % for categorical variables and as mean ± SD for continuous variables. When the data were examined for normality, Kolmogorov–Smirnov test values were p > .05. The Wilcoxon test, a nonparametric test, was applied to determine whether there was a significant difference in the scale and subscale scores before and after the intervention within groups. The Kruskal–Wallis test was used to compare scale and subscale scores between groups. To determine between which groups there was a significant difference, the Games–Howell post hoc test was used. p < .05 was considered statistically significant.
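For readers who want to reproduce this kind of nonparametric workflow outside SPSS, the sketch below shows the equivalent tests in Python with SciPy: a Kruskal–Wallis test across the four groups and a Wilcoxon signed-rank test for pre/post change within one group. The arrays are simulated, hypothetical pain scores rather than study data, and the Games–Howell post hoc test is not part of SciPy, so it is omitted here (it is available in third-party statistics packages).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical post-treatment VAS scores for the four groups (not study data)
miep_pne = rng.normal(3.0, 1.0, 14)
miep     = rng.normal(4.0, 1.0, 12)
pne      = rng.normal(4.5, 1.0, 12)
control  = rng.normal(6.0, 1.0, 12)

# Between-group comparison: Kruskal-Wallis H test
h_stat, p_between = stats.kruskal(miep_pne, miep, pne, control)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_between:.4f}")

# Within-group comparison (pre vs. post) for one group: Wilcoxon signed-rank test
pre  = rng.normal(6.5, 1.0, 14)
post = pre - rng.normal(2.0, 0.8, 14)   # simulated improvement
w_stat, p_within = stats.wilcoxon(pre, post)
print(f"Wilcoxon W = {w_stat:.2f}, p = {p_within:.4f}")

# Descriptives as reported in the paper: mean +/- SD per group
for name, g in [("MIEP+PNE", miep_pne), ("MIEP", miep), ("PNE", pne), ("CG", control)]:
    print(f"{name:9s} mean = {g.mean():.2f} +/- {g.std(ddof=1):.2f}")
```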
Study design This was a single‐center, prospective, assessor‐blinded, randomized controlled trial study. Ethics approval was obtained from the Clinical Research Ethics Committee of Uskudar University (approval number: E‐99102440‐/2022‐11). The study was registered at ClinicalTrials.gov (NCT05890326). Data collection was performed between November 2022 and May 2023. Moher et al.’s detailed randomized clinical trial guideline, the Consolidated Trial Reporting Standards (CONSORT) was considered for this study (Moher et al., ) and CONSORT flow chart in Figure presents the research design. All participants were given all necessary and detailed information about the study procedure and consent forms were signed. This study, which wanted to investigate the effectiveness of pain neuroscience training with MI exercises on chronic back pain, was designed as a four‐group study, including three experiments and a control group [(1) Exercise Protocol based on Motor Imagery (MIEP); (2) Pain Neuroscience Training (PNE); (3) Exercise Protocol based on Motor Imagery (MIEP) + Pain Neuroscience Training (PNE); (4) Control Group (CG)].
Participants, inclusion, and exclusion criteria Baseline and follow‐up examinations were performed by an experienced, blinded physical therapist, and neurologist who referred patients who met the study criteria to the study physical therapist. The incoming patients were diagnosed with FM by a physical therapist and neurologist who are experts in the field, according to the protocol described by Wolfe et al. . “The Fibromyalgia 2016 criteria require the following: (1) Widespread Pain Index (WPI) score ≥7 and Symptom Severity Score (SSS) score ≥5, or a WPI score 4–6 and SSS score ≥9; (2) the presence of widespread pain as defined above; and (3) symptoms of at least 3 months in duration (5).” Conditions required to participate in the study were: (1) experiencing widespread chronic back pain for more than 12 months; (2) pain in at least 12 or more of 18 tender points with pressure of 5 kg/cm 2 ; (3) ages 18–60; (4) not using pharmacological treatment; (5) not participating in any pain program; (6) not having participated in any physical exercise program in the last 2 years. The conditions that constitute obstacles to participate in the study are (1) pregnancy; (2) currently continuing a physical exercise program; (3) the presence of a psychiatric disorder under psychological treatment; (4) the presence of major neurological and/or mental diseases such as Alzheimer's, dementia, and epilepsy; (5) the presence of physical and mental disabilities.
Procedure Our study is a randomized controlled clinical study with four groups: one CG and three experimental groups. The experimental groups received MIEP and PNE either alone (Group 2: MIEP, n = 12; Group 3: PNE, n = 12) or combined (Group 1: MIEP + PNE, n = 14), giving three experimental groups. A fourth group served as the control for the experimental groups (Group 4: CG, n = 12). MIEP was performed twice a week (24 sessions in total) and PNE was performed once every 2 weeks (six sessions in total), with the entire sample ( n = 50) being followed for 12 weeks.
Interventions: Motor imagery‐based exercise protocol The MIEP was adapted from Paolucci et al.'s previous study. Patients in the groups receiving MIEP attended sessions of a maximum of 60 min in groups of three to four people, twice a week for 12 weeks. The MIEP intervention was a blended protocol, designed to best suit the study's patient group by drawing on previous studies that used an MI protocol. While selecting the exercises in the program, care was taken to ensure that the cervical–thoracic spine worked in every plane and axis, that the cervical–thoracic muscles were engaged, and that nerve mobilization was included. Before the study began, video recordings of the selected exercises were made and a summary brochure about MI was prepared for the participants. In the first session, the effects of MI were explained theoretically, and at the beginning of each session, training on diaphragmatic breathing was given, as breathing exercises were performed before MI. At the end of each session, after MI, bodily awareness was stimulated by combining breathing and relaxation exercises, participants' opinions were gathered, and their questions were answered. Relaxing meditation music was played in the background during the MI and breathing exercises.
Interventions: Pain neuroscience education PNE was delivered face‐to‐face by a PNE‐certified researcher (Selin Kircali), following the protocol of Saracoglu et al. . The training sessions were given in groups of five to six people, lasted a maximum of 60 min, and were held once every 2 weeks, for a total of six sessions over 12 weeks. As in similar studies in the literature, the PNE content was mainly based on the book " Explain Pain " by David Butler and Lorimer Moseley, written for patients suffering from chronic pain; in addition, up‐to‐date information from recently published articles on pain was shared. Brochures summarizing the educational content were prepared before the start of the training sessions. In each training session, participants' opinions and thoughts were taken into account, interactive participation was encouraged by allowing them to give examples on the subject, and their questions were answered. Sessions were held primarily face‐to‐face and were held online only if individuals were unable to attend in person for any reason. The sessions were organized and delivered by the physiotherapist who was trained in pain neurophysiology and pain management, holds a master's degree in neuroscience, and also conducted the research. Although neurophysiological terms were used during the explanations, they were simplified so that participants could understand them, and metaphorical examples were used to make them memorable.
Control group Participants in the CG did not receive any intervention and were followed for 12 weeks. Their final evaluation was carried out 12 weeks after the initial evaluation.
Randomization and blinding Individuals who met the criteria, had similar pain intensities, and agreed to participate were given a volunteer code in order of application. Without the patients' knowledge, papers numbered 1 to 50 were allocated to the four groups by drawing lots (simple randomization), and each participant was included in the group to which the number matching their volunteer code had been drawn. Preliminary and final evaluations were made by the two physiotherapists (Selin Kircali and Öznur Özge Özcan) who managed the study, performed the applications, and were blinded during the recruitment of participants. The physical therapist (Öznur Özge Özcan) remained blind to the group allocation of the patients she referred as suitable for the study. The participants were also asked not to disclose any information about their treatment during the follow‐up assessments.
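The lot‐drawing procedure described above can be sketched as follows. This is only an illustration, not the authors' actual allocation procedure; the group sizes are taken from the allocation reported in the Procedure section, and the variable and group names are invented for the example.

import random

group_sizes = {"MIEP+PNE": 14, "MIEP": 12, "PNE": 12, "CG": 12}  # reported final allocation
codes = list(range(1, 51))   # volunteer codes 1..50, assigned in order of application
random.shuffle(codes)        # drawing lots

allocation = {}
start = 0
for group, size in group_sizes.items():
    for code in codes[start:start + size]:
        allocation[code] = group
    start += size

print(allocation[1])  # group of the participant holding volunteer code 1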
Outcome measurements 2.8.1 Visual Pain Scale Using a ruler, the score is determined by measuring the distance (mm) on the 10 cm line between the "no pain" anchor and the patient's mark, providing a score range of 0–10. A higher score indicates greater pain intensity. The Turkish validity study of the scale, the original version of which was developed by Freyd et al. in 2001, was conducted by Yaray et al. in 2011. In that research, the Cronbach's alpha coefficient was 0.965, supporting the validity of the Visual Analog Scale (VAS) in Turkey (Yaray et al., ). Mease et al. reported that, in clinical research on pain interventions in FM patients, the minimal clinically important differences (MCIDs) may correspond to decreases in scores of 32.3% and 34.2% from baseline. 2.8.2 The Pain Beliefs Questionnaire (PBQ) Beliefs about the causes and consequences of pain are collected in a total of 12 items; eight of them (items 1, 2, 3, 5, 7, 8, 10, 11) express organic pain beliefs and four of them (items 4, 6, 9, 12) express psychological pain beliefs. The evaluation uses a six‐point Likert‐type scoring, and the person marks one of the "always–almost always–often–sometimes–rarely–never" options according to their own beliefs. The Turkish validity and reliability study of this scale, first published by Edwards et al. in 1992, was conducted by Sertel et al. in 2006. In that research, the Cronbach alpha value for internal consistency was 0.71 for organic beliefs and 0.73 for psychological beliefs, supporting validity in Turkish (Sertel Berk, ). There is no reported reference MCID for the PBQ in patients with FM. 2.8.3 The Pain Catastrophizing Scale‐9 The Pain Catastrophizing Scale‐9 (PCS‐9) is a 13‐item self‐report measure designed to assess catastrophic thinking related to pain among adults with or without chronic pain. Patients are asked to rate the degree to which they have any of the thoughts described in the questionnaire using a five‐point Likert scale ranging from 0 (never) to 4 (always). Scores range from 0 to 52, with higher scores indicating a higher level of catastrophizing. The Turkish validity and reliability study of the scale, first developed by Sullivan et al. in 1995, was conducted by Uğurlu et al. in 2017. In that research, Cronbach reliability coefficients for the helplessness, rumination, and magnification subscales and the total score were 0.909, 0.856, 0.906, and 0.955, respectively, and validity and reliability were established (Süren et al., ). The MCID for the PCS has been found to range from 3.2 to 4.5 in patients (Darnall et al., ). 2.8.4 The Tampa Scale for Kinesiophobia The Tampa Scale for Kinesiophobia (TSK) is a 17‐item questionnaire that quantifies fear of movement. Individual item scores range from 1 to 4, with the negatively worded items (4, 8, 12, 16) reverse scored (4–1). The 17‐item TSK total scores range from 17 to 68, where the lowest score of 17 means no or negligible kinesiophobia and higher scores indicate an increasing degree of kinesiophobia. This scale, developed but not published by Miller et al. in 1991, was published by Vlaeyen et al. in 1995. The reliability study in Turkey was published by Tunca et al. in 2011; test–retest reliability was assessed and the intraclass correlation coefficient (ICC) value was 0.806, indicating excellent reliability (Tunca et al., ).
The MCID for the TSK was reported to be at least 4.5 points for patients with chronic musculoskeletal pain (Saracoglu et al., ). 2.8.5 Hospital Anxiety and Depression Scale In this 14‐item scale, seven items evaluate anxiety (odd‐numbered items) and seven evaluate depression (even‐numbered items); scores between 0 and 7 are considered normal, scores between 8 and 10 are considered borderline, and scores of 11 and above are considered abnormal. The Turkish validity and reliability study of the scale, first developed by Zigmond and Snaith in 1983, was conducted by Aydemir et al. in 1997, and the Cronbach's alpha coefficient was 0.8525 for the anxiety subscale and 0.7784 for the depression subscale; thus, the Turkish form was accepted as valid and reliable (Aydemir et al., ). The MCID for the Hospital Anxiety and Depression Scale (HADS) has previously been reported as 5.7 (Longo et al., ). 2.8.6 Cognitive Emotion Regulation Questionnaire The scale, consisting of 36 items, measures nine subheadings: "self‐blame, acceptance, focusing on negative thoughts, positive refocusing, refocusing on the plan, positive reconsideration, fixing the point of view, destruction, blaming others." Evaluation is made with a five‐point Likert‐type scoring, and the person ticks one of the options "almost never–rarely–sometimes–often–almost always" according to his/her own thoughts and feelings. The scale was first developed by Garnefski et al. in 2001, and its Turkish validity studies were conducted by Tuna and Bozo in 2012. The statistical analysis yielded a Cronbach alpha value of 0.90, indicating that the Cognitive Emotion Regulation Questionnaire (CERQ) is a valid and reliable measurement tool (Tuna & Bozo, ). There is no reported reference MCID for the CERQ in patients with FM. 2.8.7 Rosenberg Self‐Esteem Scale In this 10‐item scale, participants choose one of the options "very true–true–wrong–very wrong" according to their thoughts. In scoring, the options in the first five items are valued "4–3–2–1," while in the last five items the order is "1–2–3–4." A total of 30–40 points indicates a good level of self‐esteem, 26–29 points indicates a moderate level, and 25 points or less indicates low self‐esteem. The Turkish validity and reliability of the scale, developed by Rosenberg in 1963, was established by Çuhadaroğlu in 1986 (Tonga & Halisdemir, ). There is no reported reference MCID for the Rosenberg Self‐Esteem Scale (RSS) in patients with FM. 2.8.8 The Body Awareness Questionnaire The Body Awareness Questionnaire (BAQ) is an 18‐item scale; items are scored on a 1–7 scale, the total score is calculated as the sum of the items, and the items marked with asterisks are reverse scored. The Turkish validity and reliability study of the questionnaire, developed by Shields and colleagues in 1989, was conducted by Karaca in 2017. With a Cronbach's alpha value of 0.91, the Turkish version of the BAQ was accepted as valid and reliable (Karaca, ). There is no reported reference MCID for the BAQ in patients with FM.
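As an illustration of the reverse‐scoring rule described for the TSK above, a total score can be computed as in the short sketch below. This is only an example; the function name is invented here, and the item responses shown are made up rather than study data.

def tsk_total(item_scores):
    # Total TSK score from 17 item responses (each 1-4); items 4, 8, 12 and 16
    # are reverse scored (4 becomes 1, 3 becomes 2, and so on), as described above.
    if len(item_scores) != 17:
        raise ValueError("TSK has 17 items")
    reverse_items = {4, 8, 12, 16}
    total = 0
    for item_number, score in enumerate(item_scores, start=1):
        total += (5 - score) if item_number in reverse_items else score
    return total

# A respondent answering 2 on every item scores 13*2 + 4*3 = 38.
print(tsk_total([2] * 17))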
2.9 Sample size The sample size was calculated with the G*Power 3.1.9.4 program, taking into account the significance level and the effect size of the established hypothesis. Based on the correlation coefficient between pain intensity and depression ( r = 0.562, p < .05) obtained in Özer et al.'s study (2018), the effect size was calculated as 0.74, corresponding to a large effect (Özer et al., ). To detect a significant difference with α = 0.05 and 1 − β = 0.95 (i.e., a Type I error rate of 0.05 and a test power of 95%), the required sample size was calculated as 44 people, with a minimum of 11 in each of the four groups (Özer et al., ).
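For readers who want to check the order of magnitude of this calculation, the sketch below uses statsmodels rather than G*Power and assumes a one‐way ANOVA across the four groups with Cohen's f = 0.74. The text does not state which G*Power test family was used, so this framing, and the variable names, are assumptions of the example rather than the authors' procedure.

from statsmodels.stats.power import FTestAnovaPower

# Achieved power for the reported total sample of 44 under the assumed ANOVA framing.
achieved_power = FTestAnovaPower().power(effect_size=0.74, nobs=44, alpha=0.05, k_groups=4)
print(achieved_power)  # well above the targeted 0.95 under these assumptions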
2.10 Statistical analyses Statistical analyses were performed using IBM SPSS Statistics for Windows, Version 25.0 (Statistical Package for the Social Sciences, IBM Corp.). Descriptive statistics are presented as n and % for categorical variables and as mean ± SD for continuous variables. When the data were examined for normality assumptions, Kolmogorov–Smirnov values were p > .05. The Wilcoxon test, a nonparametric test, was applied to determine whether there was a significant difference in the scale and subscale scores before and after the intervention within each group. The Kruskal–Wallis test was used to compare the scale and subscale scores between groups. To determine between which groups there was a significant difference, the Games–Howell post hoc test was used. p < .05 was considered statistically significant.
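The analysis workflow above was run in SPSS; purely as an illustration of the same sequence of tests, the sketch below uses SciPy with made‐up scores. The arrays are not study data, and the Games–Howell post hoc test is not part of SciPy, so it is omitted here.

import numpy as np
from scipy import stats

# Made-up pre/post pain scores for one group of 12 participants (illustrative only).
pre = np.array([7.5, 8.0, 6.5, 7.0, 9.0, 8.5, 7.0, 6.0, 8.0, 7.5, 6.5, 7.0])
post = np.array([4.0, 5.5, 3.0, 4.5, 6.0, 5.0, 4.0, 3.5, 5.0, 4.5, 3.0, 4.0])

# Normality screening (Kolmogorov-Smirnov against a normal with the sample mean/SD).
print(stats.kstest(pre, "norm", args=(pre.mean(), pre.std(ddof=1))))

# Within-group pre/post comparison (Wilcoxon signed-rank test).
print(stats.wilcoxon(pre, post))

# Between-group comparison, e.g., post scores of four groups (Kruskal-Wallis test).
print(stats.kruskal(post, post - 0.5, post + 0.3, post + 1.0))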
RESULTS In this four‐group study involving 50 participants, 14 participants received MI exercises and pain neuroscience education together in the combined group (Group 1: MIEP + PNE); 12 participants were in the group that received only MI exercises (Group 2: MIEP); 12 participants were allocated by simple randomization to the group that received only PNE (Group 3: PNE); and 12 participants were allocated to the CG, which received no intervention (Group 4: CG). The distribution of the participants' sociodemographic characteristics, such as gender, age, and education, across the groups is shown in Table . 3.1 Outcome measurements 3.1.1 Visual Pain Scale No significant difference was detected in the preliminary VAS (pre‐VAS) scores of the participants across the groups; thus, the groups consisted of participants with similar VAS scores. There was also no significant difference in the final VAS (post‐VAS) scores between the groups, so no intervention was superior to the others (see Table ). There was a statistically significant difference between pre‐ and post‐VAS scores in the MIEP + PNE ( p = .003, 95% confidence interval [CI], −4.707 to −0.992), MIEP ( p = .003, 95% CI, −5.480 to −1.019), and PNE ( p = .002, 95% CI, −3.613 to −1.546) groups. Although the between‐group difference in post‐VAS scores was not statistically significant, the decrease was clinically meaningful (>32.3% from baseline) in the intervention groups, especially in the MIEP group. VAS scores were lower after the intervention than before. In the CG, no significant difference was found between the pre‐ and post‐intervention VAS measurements (see Table ). 3.1.2 The Pain Beliefs Questionnaire A statistically significant difference was found between the MIEP + PNE and CG groups in the final PBQ (post‐PBQ) organic beliefs subscale scores ( p = .029, 95% CI, 1.724–9.495); the combined group was superior to the CG (see Table ). A statistically significant difference was observed between pre‐ and post‐PBQ organic beliefs scores in the MIEP + PNE ( p = .017, 95% CI, −7.821 to −0.318) and PNE ( p = .003, 95% CI, −9.799 to −0.040) groups, with lower scores after the intervention than before (see Table ). 3.1.3 The Pain Catastrophizing Scale‐9 No significant difference was detected in the final PCS‐9 (post‐PCS‐9) scores between the groups, so no intervention was superior to the others (see Table ). There was a statistically significant difference between pre‐ and post‐PCS‐9 total ( p = .006, 95% CI, −15.54 to −0.793), rumination ( p = .007, 95% CI, −5.854 to −0.146), and helplessness ( p = .025, 95% CI, −7.628 to −0.031) subscores in the PNE group. PCS‐9 total, rumination, and helplessness scores were lower after the intervention than before (see Table ) and were clinically improved (>4.5 points) in the PNE group. 3.1.4 The Tampa Scale for Kinesiophobia No significant difference was detected in the final TSK (post‐TSK) scores between the groups, so no intervention was superior to the others (see Table ). No significant difference was detected between pre‐ and post‐TSK scores, although TSK scores were lower after the intervention than before (see Table ). 3.1.5 Hospital Anxiety and Depression Scale No significant difference was detected in the final HADS (post‐HADS) scores between the groups, so no intervention was superior to the others (see Table ).
Between the pre‐ and post‐HADS measurements, there was a statistically significant difference in the anxiety ( p = .026, 95% CI, −4.808 to 1.308) and depression ( p = .035, 95% CI, −5.547 to 1.867) subscale scores in the PNE group. HADS scores were lower after the intervention than before (see Table ), but the improvement did not reach the clinical threshold (>5.7 points) in the intervention groups. 3.1.6 Cognitive Emotion Regulation Questionnaire No significant difference was detected in the final CERQ (post‐CERQ) scores between the groups, so no intervention was superior to the others (see Table ). Between the pre‐ and post‐CERQ measurements, the subscales refocusing on the plan ( p = .014, 95% CI, −0.42 to 3.42), positive reconsideration ( p = .005, 95% CI, −0.31 to 5.31), and destruction ( p = .007, 95% CI, −4.01 to 0.17) showed a statistically significant difference in the PNE group. "Destruction" subscores were lower after the intervention than before, whereas "refocusing on the plan" and "positive reconsideration" subscores were higher after than before (see Table ). 3.1.7 Rosenberg Self‐Esteem Scale No significant difference was detected in the final RSS (post‐RSS) scores between the groups, so no intervention was superior to the others (see Table ). No significant difference was detected between pre‐ and post‐RSS scores; RSS scores were lower after the intervention than before (see Table ). 3.1.8 The Body Awareness Questionnaire A statistically significant difference was found between the final BAQ (post‐BAQ) scores of the groups. The post‐BAQ score was higher in the MIEP group compared with the other groups ( p = .008). In the post hoc test, the MIEP group was statistically significantly superior to the PNE group ( p = .008, 95% CI, −21.2 to −5.7) and the MIEP + PNE group ( p = .046, 95% CI, 2.92–23.2; see Table ). There was a statistically significant difference between pre‐ and post‐BAQ scores in the MIEP group ( p = .041, 95% CI, −4.70 to 17.52), with higher BAQ scores after the intervention than before (see Table ).
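The clinical‐importance judgments in this section compare each change against the MCIDs listed in the Methods (a percentage of the baseline score for the VAS, and fixed point thresholds for the PCS, TSK, and HADS). A minimal sketch of the VAS check is shown below; the function name and the example numbers are illustrative only and are not taken from the study data.

def vas_clinically_improved(pre_vas, post_vas, mcid_percent=32.3):
    # True when the decrease from baseline reaches the percentage MCID cited above.
    if pre_vas <= 0:
        return False
    percent_decrease = (pre_vas - post_vas) / pre_vas * 100
    return percent_decrease >= mcid_percent

# A drop from 7.0 to 4.0 is roughly a 42.9% decrease, above the 32.3% threshold.
print(vas_clinically_improved(7.0, 4.0))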
DISCUSSION Our main purpose in this clinical study was to investigate the effectiveness of MIEP and PNE, especially on pain intensity; additionally, we aimed to examine their effects on kinesiophobia, body awareness, psychological state, cognitive–emotional regulation, and self‐esteem. According to the results of this study, PNE combined with MIEP was associated with clinically significant improvement in pain and pain beliefs, while PNE and MIEP alone also produced clinical improvement during the 12‐week follow‐up period. Based on the pre‐ and post‐intervention results, MIEP alone was more effective for body awareness and pain, whereas PNE was superior for kinesiophobia, psychological state, cognitive–emotional regulation, and self‐esteem. Importantly, Saracoglu et al. reported that adding 6‐week PNE sessions to pharmacological treatment was successful in improving functional status, pain, and the level of kinesiophobia in patients with FM; however, there was no pharmacological intervention in our study. That study also showed that the inclusion of PNE was highly effective on kinesiophobia and pain in FM patients receiving pharmacological treatment, but in our results, the combination of MIEP and PNE did not show this effect. Our study demonstrated that the combination of PNE and MIEP did not result in superior outcomes on all scales over the 12‐week follow‐up period. Differences in baseline values between the intervention groups and the CG were not attributed to the randomization procedure, because the baseline values of the groups were similar and not statistically different for the primary outcome (pain) or the secondary outcome measures (all scales). This study has certain limitations. First, the participants were newly diagnosed patients who had not received any intervention before; this may have created additional fatigue in the combined group in terms of physical activity and, therefore, psychological effects. The number of participants was also small, so further studies are needed. MIEP improved motor visualization ability, reduced pain, and increased body awareness. PNE improved participants' organic pain beliefs and moved them away from fears, catastrophizing, and negative thoughts about pain. PNE and MIEP alone may each contribute to the psychological management of FM. Finally, these were the results of a 12‐week intervention in FM patients, and the long‐term effects of these interventions, either combined or alone, remain unclear. 4.1 Motor imagery exercises MI exercises are designed as a type of neuroscientific exercise to improve the virtual body map in the brain, and studies have shown positive effects on chronic pain in current physiotherapy programs (Javdaneh et al., ). In a study conducted by MacIver et al. in 2008, MI training was observed to decrease pain perception at the cortical level (Ribas et al., ). MI therapy used by Lindgreen et al. to cope with postsurgical pain has also yielded successful results. Vran et al. have shown that MI exercises have potential benefits for the management of pain during painful activities in patients with chronic pain (Sengul et al., ). Another clinical study suggested that stabilization exercises combined with MI exercises were superior to stabilization exercises alone in reducing pain, disability, and kinesiophobia in patients with chronic pain (Javdaneh et al., ).
MI‐based rehabilitation has been found to have a stronger effect on anxiety‐coping behavior in pain patients than traditional physiotherapy (Paolucci et al., ). Studies in the literature on MI have shown that improvements in pain intensity, kinesthetic–visual imagery, fear of movement, anxiety–depression, cognitive status, and body awareness have been achieved through plasticity in the brain. 4.2 Pain neuroscience education PNE redefines pain by eliminating fears and misconceptions about pain, creating a change in pain cognitions and perceptions. Thanks to this reconceptualization, participants can be more open to the activities and movements they feared before the training and can move away from catastrophizing thoughts about pain (Malfliet et al., ). Thus, it allows changing cognition and erroneous beliefs, as well as improving functionality and physical condition (Galan‐Martin et al., ). Nijs et al. have supported PNE as a suitable method to prepare patients for cognition‐targeted exercise therapy. Many pain researchers, such as Moseley, Louw, Diener, Butler, Meeus, Ryan, Van Oosterwijck, and Puentedura, have suggested that pain neuroscience education programs improve pain intensity, pain knowledge, perceived disability, and pain cognitions, either alone or in combination with physiotherapy treatments (Orhan et al., ). In studies on PNE, positive results were obtained for pain intensity, beliefs about pain, catastrophizing thoughts about pain, fear of pain‐related movement, cognitive–emotional state, psychological process management, and body awareness after the training participants received on pain. There are a few studies in the literature investigating the effectiveness of MIEP on pain intensity, and these studies have shown that it is effective in reducing pain severity. Studies investigating the effect of PNE on pain severity have concluded that PNE is effective when added to various physiotherapy applications but insufficient on its own, and that further research is required. In this clinical study, our hypothesis was that both of these applications would show significant effects on pain severity. The results showed a significant decrease in VAS scores between the pre‐ and post‐intervention measurements in all three experimental groups, supporting our hypothesis. Although pain intensity decreased in all groups, no significant superiority was detected between the groups; a decrease in pain intensity was observed whether MIEP and PNE were applied individually or together. We expected participants' pain beliefs to change positively after PNE. There was a significant improvement in PBQ "organic beliefs" subscores in both groups where PNE was applied, and the MIEP + PNE group was statistically significantly superior to the CG in the outcome measurements. Thus, our hypothesis was confirmed and supported the studies in the literature. After PNE, we expected fear and negative thoughts about pain to decrease, as people with chronic pain would gain a better perspective on pain management. Previous studies have also found that PNE produces a positive change in pain‐related thoughts. According to the PCS‐9 scores, significant improvements were found in the PNE group in the total score and in the "rumination" (repeated negative thoughts) and "helplessness" subscores. Although there was an improvement in scores in the MIEP + PNE combined group, no statistical significance was found.
This may be because the baseline scores of the participants in the combined group for pain‐related fear and negative thoughts were not high; therefore, our hypothesis was only partially supported. Although there was no significant superiority between the groups in the final measurements, PCS‐9 averages were lower in both groups where PNE was applied (lowest in the PNE‐only group). In studies where MIEP and PNE were investigated separately, both were observed to make it easier for people to manage psychological processes. Our hypothesis in this study was that psychological process management would improve. According to the HADS, which comprises the two subscales of anxiety and depression, a significant improvement was observed in the PNE group in both subscales between the pre‐ and post‐intervention measurements. Although no significant superiority was detected between the groups in the outcome measurements, the lowest scores were seen in the groups where PNE was applied, and our hypothesis was partially supported. While cognitive–emotional regulation abilities were expected to improve after PNE and MIEP, our hypothesis was only partially supported: a significant improvement was seen in the CERQ subheadings of refocusing on the plan, positive reconsideration, and destruction in the "PNE only" group, and no significant superiority between the groups was detected in the outcome measurements. Previous studies have shown that MIEP improves body awareness by improving visualization ability and the perceptual virtual body map, and our hypothesis was that MI exercises would have a positive effect on body awareness. Although an increase in the BAQ score was observed in both experimental groups where MIEP was applied, a significant improvement was seen only in the "MIEP only" group. In addition, a significant superiority was detected in the MIEP group compared with the other experimental groups in the outcome measurements. This result broadly supported the literature and confirmed our hypothesis. Contrary to studies in the literature showing that MI reduces fear of movement due to pain, no significant results were obtained in any experimental group in our clinical study. Likewise, while an improvement was expected in self‐esteem, there was no significant change in RSS scores in any group. The reason the expected effect was not obtained for these parameters may be that, on average, the participants did not have a significant level of fear of movement at the first measurement and, similarly, had average RSS scores at the first measurement.
CONCLUSION In this clinical study in which MIEP and PNE were combined, no superiority was detected between the groups in terms of pain intensity, and both applications were effective in reducing pain severity whether applied together or individually. MIEP improved MI ability, reduced pain, and increased body awareness; when applied alone, MI exercises were effective in improving body awareness, with significant superiority over the other experimental groups. PNE allows people to develop more positive organic pain beliefs, moves them away from catastrophizing negative thoughts about pain, protects them from anxiety and depression by helping them manage psychological processes more easily, and promotes a positive perspective in cognitive–emotional regulation. When applied alone, PNE was effective in improving organic pain beliefs, pain‐related thoughts, psychological processes, and cognitive–emotional regulation parameters. Although traditional physiotherapy practices are more common in our country, specific therapy practices are mostly carried out in private rehabilitation centers; for example, MIEP techniques are performed by physiotherapists who are experts in their field, which may increase the cost. Given the scarcity of PNE‐certified clinicians and their heavy workload, applying these treatments in combination and in groups to more patients can save time, reduce costs, and enable more effective treatment of pain. Although our results do not support recommending the combined treatment, they do support conducting further studies in larger samples of FM patients. Finally, increasing the number of patients in each group in future studies may improve the generalizability of our findings.
Selin Kircali : Conceptualization; writing—original draft; investigation; validation; methodology; formal analysis. Öznur Özge Özcan : Writing—review and editing; writing—original draft; methodology; conceptualization; data curation; validation. Mesut Karahan : Conceptualization; supervision; project administration; writing—review and editing; methodology; writing—original draft; software; validation; visualization; funding acquisition; investigation; data curation; resources.
This research received no specific grant from any funding agency.
The authors declare no conflicts of interest.
The peer review history for this article is available at https://publons.com/publon/10.1002/brb3.70013 .
Advancements in minimally invasive treatment of deltoid ligament injuries combined with distal tibiofibular syndesmosis injuries | 202fc6ed-dbcb-4b9f-9938-963b197cc00c | 11776317 | Surgical Procedures, Operative[mh] | Deltoid ligament injuries combined with distal tibiofibular syndesmosis injuries are often accompanied by avulsion or complete fractures and ligamentous tears. These injuries frequently occur simultaneously in clinical settings and are often associated with severe complications, making them a focal point of research and debate in foot and ankle surgery. Preoperative diagnosis of these injuries should be comprehensive, incorporating the mechanism of injury, physical examination, and imaging studies to fully assess the patient’s condition. Studies have shown that the contact area of the talus bone decreases significantly as the talus shifts laterally. Specifically, a displacement of 1 mm can result in a 42% loss of contact area . Therefore, promptly restoring the rotational and valgus stability of the talus within the ankle mortise is critical. The stability of the talus depends primarily on non-osseous structures, especially the deltoid ligament and the distal tibiofibular syndesmosis, underscoring the importance of accurate diagnosis and treatment of these injuries. Conservative treatment is generally reserved for cases with intact osseous structures, where the ankle joint is immobilized using a cast or brace. However, this method relies on scar healing of the ligament ends, which significantly compromises ankle stability. Current surgical treatments often involve percutaneous screw fixation for syndesmosis injuries and anchor fixation for deltoid ligament repairs . Nevertheless, studies have shown that patients treated with these traditional approaches often have poor outcomes, with low functional scores for the ankle joint . Thus, simple fixation and natural ligament healing are insufficient to meet the demands of postoperative daily activities. With the rise of minimally invasive techniques, the reconstruction or repair of ligaments to achieve both anatomical and functional restoration has emerged as a new therapeutic direction. Innovations such as elastic micromotion devices for syndesmosis injuries and arthroscopic repairs of deltoid ligaments are becoming mainstream approaches in foot and ankle surgery . This review aims to summarize and evaluate the advantages and disadvantages of existing surgical treatment methods while exploring new materials and techniques under the concept of minimally invasive treatment. These include elastic devices, autograft or allograft ligament reconstruction, and arthroscopic techniques, providing clinicians with updated treatment principles and future directions for managing these complex injuries. The key points of the treatment of deltoid ligament injuries combined with distal tibiofibular syndesmosis injuries were shown in Table .
Injuries to the distal tibiofibular syndesmosis and the deltoid ligament are common in clinical practice, often occurring in conjunction with ankle fractures. The primary mechanism of these injuries is abduction and external rotation, frequently observed in Maisonneuve fractures or Dupuytren fractures . The most common type is supination-external rotation ankle fractures, such as Danis-Weber type B, while pronation-external rotation ankle fractures, such as Weber type C, are also seen . Currently, most syndesmosis injuries occur due to significant rotational forces experienced during ankle motion. When subjected to external rotational forces, the fibula externally rotates, and the anterior inferior tibiofibular ligament (AITFL) is subjected to excessive tension, exceeding its biomechanical limit, leading to injury or rupture . Additionally, when external rotation is combined with foot abduction, exerting force on the interosseous membrane, it can result in a complete rupture of the deltoid ligament and separation of the tibiofibular syndesmosis . The integrity of the syndesmosis and the deltoid ligament is crucial for ankle joint stability and directly impacts treatment strategies . From an anatomical perspective, ankle stability is maintained by three key structures: the medial malleolus and deltoid ligament, the lateral malleolus and lateral collateral ligament, and the distal tibiofibular syndesmosis . If one of these structures is compromised but the other two remain intact, ankle stability is typically preserved, and surgical intervention is not generally required. However, with advancements in foot and ankle surgery and sports medicine, the understanding of combined deltoid ligament and syndesmosis injuries has significantly evolved, laying a solid foundation for precise and effective treatment.
In the structure of the ankle joint, the distal tibiofibular syndesmosis plays a crucial role. It not only provides effective control over ankle joint stability but also supports its functional mobility. Additionally, the syndesmosis counters axial, anteroposterior, and rotational stresses, serving as a key mechanism to protect the ankle . It is composed of the anterior inferior tibiofibular ligament (AITFL), posterior inferior tibiofibular ligament (PITFL), transverse ligament, interosseous ligament, and the distal portion of the interosseous membrane. The syndesmosis allows a micromotion range of 2° to 5° in three-dimensional space, which buffers excessive forces and reduces the risk of fractures . The deltoid ligament, also known as the medial collateral ligament, connects the distal tibia to the talus in a fan-shaped structure. It consists of two layers: the superficial and deep layers . The superficial layer comprises the tibionavicular ligament, tibiospring ligament, tibiocalcaneal ligament, and the superficial posterior tibiotalar ligament, which collectively prevent excessive talar eversion . The most superficial structure, the tibiospring ligament, primarily supports the functions of the superficial layer . The deep layer includes the deep anterior tibiotalar ligament and the deep posterior tibiotalar ligament, which restrict excessive talar pronation and maintain joint stability. Among these, the deep posterior tibiotalar ligament is the strongest component of the deltoid ligament complex. Overall, the deep layer of the deltoid ligament contributes significantly more to ankle stability than the superficial layer .
Clinical presentation
Distal tibiofibular syndesmosis injuries often manifest as pain, swelling, and restricted mobility around the lateral malleolus. Common diagnostic maneuvers include the Cotton test and the external rotation test. Mild syndesmosis injuries can often be diagnosed through physical examination; however, in cases of severe soft tissue contusion or fracture, pain may lead to false-positive results, necessitating confirmation with imaging studies. Patients with deltoid ligament injuries often have a history of ankle sprain, accompanied by medial malleolus tenderness and restricted motion. These symptoms are only indicative and cannot confirm a diagnosis. Definitive diagnosis requires combining eversion stress tests, the Cotton test, and imaging findings.
Physical examination
Medial malleolus tenderness can be a preliminary indicator of medial malleolus fractures or deltoid ligament injuries. Common tests include the external rotation stress test, in which, with the patient seated and the hip and knee flexed at 90°, the examiner stabilizes the leg and externally rotates the foot to observe for pain; and the squeeze test, compression of the mid-calf to check for tenderness, though its sensitivity is relatively low. Lateral malleolus examination may reveal tenderness over the syndesmosis. Additionally, external rotation tests, the Cotton test, and dorsiflexion tests are highly sensitive for diagnosing syndesmosis injuries.
Imaging studies
X-ray
X-rays are effective for screening musculoskeletal injuries in emergency settings. A distal tibiofibular gap > 6 mm suggests syndesmosis injury; a medial clear space > 4 mm or a talar tilt angle of 6°–10° indicates possible deltoid ligament injury. Injecting 3 ml of contrast medium into the ankle can assist in evaluating syndesmosis injuries through anatomical landmarks. In addition, weight-bearing ankle plain X-rays are a critical tool for evaluating the stability of ankle injuries, particularly when ligamentous injury or fracture is suspected. They provide information that non-weight-bearing X-rays or other imaging modalities may not reveal, helping to identify subtle diastasis of the distal tibiofibular syndesmosis or medial clear space widening, which are hallmarks of unstable injuries.
CT
CT provides precise information on bony structures. A distal tibiofibular gap > 6 mm at the anterior tubercle of the distal tibia suggests injury. CT scans in the coronal and sagittal planes can detect displacements of 2–3 mm, but sensitivity decreases for displacements < 1 mm. While CT excels at assessing fracture displacement and classification, it is less effective for soft tissue evaluation. Meanwhile, weight-bearing CT is a highly sensitive and specific tool for evaluating subtle ankle instability; its ability to visualize the joint under load, combined with precise measurement and quantitative analysis, makes it a valuable addition to the diagnostic arsenal, especially in complex or ambiguous cases.
MRI
MRI is considered a good option for imaging diagnosis of syndesmosis injuries. It allows detailed visualization of ligament morphology and helps identify ischemia, soft tissue edema, and acute ligament injuries. Studies have shown that superficial deltoid ligament tears often occur distally, while deep tears are commonly proximal, which can be confirmed on coronal MRI. Sagittal MRI is required to detect distal deltoid or spring ligament tears.
However, MRI is typically performed in a non-weight-bearing position, making it less effective at detecting functional or load-induced instability; weight-bearing X-rays or CT scans may therefore be more helpful for identifying instability.
Ultrasound
Ultrasound, being radiation-free and low cost, is increasingly used to detect syndesmosis separation. High-frequency ultrasound offers precise clinical diagnosis by assessing ligament thickness, course, and tension, especially in chronic ligament injuries.
Dynamic imaging
Although less reliable than weight-bearing X-rays because they frequently overestimate the injury, dynamic imaging techniques, such as dynamic X-rays or dynamic MRI, are increasingly used to assess ankle injuries. These techniques capture joint movement in different positions, improving sensitivity for detecting subtle syndesmosis and deltoid ligament injuries.
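For readers who tabulate imaging measurements for chart review or research, the plain-film cut-offs quoted in this section can be encoded as a simple screening rule. The following Python sketch is hypothetical: the thresholds (tibiofibular clear space > 6 mm, medial clear space > 4 mm, talar tilt of 6°–10°) are those cited above, the parameter names are assumptions, and the function is not a clinical decision tool.

    # Hedged example: encodes the radiographic cut-offs quoted above.
    # Illustrative only, not for clinical use.
    def screen_radiographs(tibiofibular_clear_space_mm,
                           medial_clear_space_mm,
                           talar_tilt_deg):
        findings = []
        if tibiofibular_clear_space_mm > 6.0:
            findings.append("possible syndesmosis injury")
        # the text cites a talar tilt of 6-10 degrees; larger tilts are
        # treated here as at least as concerning (an assumption)
        if medial_clear_space_mm > 4.0 or talar_tilt_deg >= 6.0:
            findings.append("possible deltoid ligament injury")
        return findings or ["no radiographic criterion met"]

    print(screen_radiographs(7.2, 4.6, 7.0))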
Fractures involving deltoid ligament and distal tibiofibular syndesmosis injuries are often classified as highly unstable ankle injuries. Conservative treatment relies primarily on fibrous scar healing of the ligaments, which can lead to significant anatomical and functional impairment. This instability may result in severe complications such as post-traumatic arthritis, chronic pain, and functional disability. Therefore, non-surgical treatment should be considered cautiously and is typically reserved for patients with specific contraindications to surgery, for stable injuries, or in rare cases of mild injury. In non-surgical management, long-leg or short-leg casting for 6–12 weeks is commonly used to ensure relative stability of the ankle joint. Recent studies suggest that early functional rehabilitation after cast removal may help minimize joint stiffness and muscle atrophy, promoting better functional recovery in patients with mild deltoid ligament and syndesmosis injuries. Additionally, advances in imaging technologies, such as high-resolution MRI and ultrasound, allow dynamic monitoring of ligament healing during conservative treatment, helping ensure that recovery meets functional requirements. For isolated ligament injuries with proper joint alignment, conservative treatment can achieve satisfactory short-term outcomes in selected cases with stable joints. However, when the ruptured ligament precludes reduction of the ankle joint, surgical repair of the ligament is essential; if anatomical reduction of the syndesmosis and the ankle joint has been achieved, surgical repair of the ligament is optional.
Treatment of syndesmosis injuries
Percutaneous screw fixation
Percutaneous screw fixation with a limited incision is a well-established method for treating syndesmosis injuries. According to Bekerom et al., the following points should be considered during screw placement: (1) the screw should be positioned 2–4 cm proximal to the tibiotalar joint surface and aligned parallel to the joint line; (2) in the transverse plane, the screw should be inserted at a 30° angle from posterolateral to anteromedial to prevent syndesmosis constriction that could restrict ankle dorsiflexion. Percutaneous syndesmosis fixation, while minimally invasive, may carry an increased risk of malreduction compared with open techniques, particularly when imaging guidance or proper anatomical landmarks are not meticulously utilized. The use of clamping during percutaneous syndesmosis fixation has been debated: while clamping can facilitate joint reduction, improper use may inadvertently lead to over-compression or malalignment. Biomechanical studies indicate that placing screws 30–40 mm above the tibiotalar joint minimizes von Mises equivalent stress and provides optimal fixation. A fibular fracture should be fixed at any level except the fibular head in order to restore proper fibular length. During surgery, screws can be fixed through either 3 or 4 cortices; fixation through 3 cortices is associated with a lower risk of joint space narrowing, while 4-cortex fixation provides greater holding strength but carries a higher risk of screw breakage. Liu et al. suggested that the diameter of the fibula determines the appropriate screw size. Once the syndesmosis ligament has healed, screw removal is optional to prevent adverse effects on ankle function and the risks of loosening and breakage. CT imaging has shown that the rate of syndesmosis malreduction can reach up to 36% after screw fixation, but this rate decreases significantly after screw removal. It is therefore recommended to remove screws within 8–12 weeks postoperatively to allow ligament healing and restoration of normal joint function. In recent years, bioabsorbable screws have emerged as a solution to the potential complications of permanent metal implants. Bioabsorbable screws provide sufficient fixation strength and gradually degrade after tissue healing, eliminating the need for secondary screw removal surgery. However, their mechanical properties may be inferior to those of metallic screws, particularly in high-stress applications or in patients with poor bone quality. In addition, during degradation, bioabsorbable screws can release acidic byproducts, potentially causing local inflammation, sterile effusion, or osteolysis in some patients.
Minimally invasive elastic fixation devices for the distal tibiofibular syndesmosis
Minimally invasive elastic fixation devices for the distal tibiofibular syndesmosis are designed around the biomechanical characteristics of the ankle joint and are effective in achieving syndesmosis reduction. These devices offer several advantages: reduced surgical trauma through limited incisions, preservation of syndesmotic micromotion in line with its physiological function, relatively simple application without the need for secondary implant removal, and the possibility of early weight-bearing and functional training. Common elastic fixation devices include the suture-button system, syndesmotic hooks, and hook plates.
Studies have shown that single suture-button fixation achieves results comparable to traditional screw fixation, while a two-suture-button construct provides better anatomical reduction and rotational stability, though it may still fall short of the stability of a healthy syndesmosis. The suture-button system offers specific advantages, such as avoiding common complications of screw fixation like loosening or breakage and eliminating the need for secondary implant removal. However, appropriate tension must be maintained during application to prevent over-compression of the syndesmosis, which could impair joint function. The Tightrope system is an innovative minimally invasive method for treating syndesmosis injuries; reported overall repair satisfaction rates exceed 95%, with no need for secondary implant removal. Furthermore, the Bolt Tightrope system, which combines bolt compression with a suture-loop titanium plate, has demonstrated favorable clinical outcomes, although intraoperative care is essential to avoid irritation of the distal tibial soft tissues or excessive pressure on the cortical bone. Recent studies suggest that elastic fixation devices maintain biomechanical stability of the syndesmosis more effectively than traditional screws during long-term weight-bearing activities. Additionally, novel materials such as bioresorbable suture-button devices have shown promise in optimizing functional recovery. These advances have made minimally invasive treatment of syndesmosis injuries safer and more effective, providing new possibilities for restoring ankle joint function.
Treatment of deltoid ligament injuries
Minimally invasive repair with absorbable anchors
Traditional methods for repairing deltoid ligament injuries, such as transosseous wire or non-absorbable suture repair, are associated with significant surgical trauma and suboptimal outcomes. With advances in medical biomaterials, minimally invasive suture anchor techniques have become the mainstream approach for deltoid ligament repair. These anchors, which are fully embedded within the bone, minimize irritation to surrounding soft tissues and eliminate the need for secondary implant removal surgery. The technique simplifies the procedure while ensuring robust fixation in the bone cortex and avoiding unnecessary interference with the ligament. Even in cases of compromised blood supply or delayed healing at the ligament ends, at least two tension-bearing suture strands can adequately substitute for the deep layer of the deltoid ligament, restoring its mechanical function. A critical aspect of this technique is to secure the ligament repair only after fracture reduction and fixation, to prevent excessive tension on the repaired ligament that might compromise its integrity. Shen et al. conducted a retrospective study involving 34 patients with ankle fractures and deltoid ligament ruptures. All patients underwent primary suture anchor repair, with an average follow-up of 28.4 months. The mean final AOFAS score was 92.6, and the medial clear space on stress X-rays was (3.74 ± 0.32) mm, comparable to the contralateral uninjured side at (3.65 ± 0.17) mm. The study concluded that suture anchor repair achieved satisfactory surgical outcomes and effectively restored the deep posterior tibiotalar ligament.
Further follow-up studies have confirmed that the suture anchor technique offers superior results compared with traditional methods, particularly in addressing deep deltoid ligament injuries. However, potential complications, such as rare occurrences of implant rejection or irritation of the overlying skin, highlight the need for further research. Future work may focus on evaluating various implant materials in terms of biomechanics and clinical outcomes, aiming to optimize both performance and patient comfort. Recent advancements, such as bioresorbable anchors with enhanced biocompatibility and reduced inflammatory responses, show promise in minimizing these complications and improving long-term outcomes.
Arthroscopic surgery for the ankle joint
Vega et al. conducted a retrospective study of 13 patients with medial and lateral ligament injuries caused by ankle fractures. Using an anteromedial approach via ankle arthroscopy, ruptured deltoid ligaments were repaired under direct visualization with automated suture clamps. After an average follow-up of 35 months, the median AOFAS score improved significantly from 70 preoperatively to 100 at the final follow-up, and all 13 patients reported substantial improvements in ankle function. The researchers emphasized that arthroscopy allows direct visualization and assessment of deltoid ligament injuries and lets the repair be monitored so that ankle stability can be evaluated after reconstruction. Acevedo et al. performed arthroscopic repair in 87 patients with deltoid ligament injuries using suture anchors and sutures, with a patient satisfaction rate exceeding 90%. The minimally invasive nature of arthroscopic surgery, coupled with favorable postoperative outcomes, has provided a new solution for deltoid ligament repair. Arthroscopic minimally invasive surgery has become a significant trend in foot and ankle surgery. Recent advances in arthroscopic techniques have markedly improved the precision and outcomes of deltoid ligament repairs. Enhanced instrumentation, including high-definition 4K imaging systems and advanced suture management devices, has greatly increased the accuracy of visualizing ligament injuries and the efficiency of suture placement. Bioabsorbable suture anchors have also gained popularity because of their biocompatibility and the elimination of hardware removal, with studies demonstrating outcomes comparable to or better than those of traditional metallic anchors. Additionally, combining arthroscopy with real-time imaging modalities such as ultrasound and intraoperative CT has enhanced the assessment of joint stability and ligament tension during repair, improving surgical precision and reducing the risk of residual instability. Biomechanical studies, although currently limited, have started to validate the stability and functional outcomes of arthroscopic deltoid ligament repairs, particularly under weight-bearing conditions. Another promising advancement is the incorporation of bioengineered ligament substitutes in arthroscopic procedures; these synthetic grafts, used alongside traditional suture anchors, offer high mechanical strength and promote healing in cases of extensive ligament damage. Furthermore, recent long-term follow-up studies indicate that patients undergoing arthroscopic deltoid ligament repair maintain stable functional outcomes over 5–10 years, reinforcing the efficacy and durability of arthroscopic approaches compared with open surgery.
These advancements collectively highlight the growing potential of arthroscopic techniques in foot and ankle surgery.
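To consolidate the numeric recommendations for syndesmotic screw fixation cited in this section (a screw level roughly 2–4 cm, or 30–40 mm, above the tibiotalar joint; an approximately 30° posterolateral-to-anteromedial trajectory; purchase through 3 or 4 cortices; removal at about 8–12 weeks), a short Python sketch is given below. It is illustrative only: the figures come from the studies quoted above, while the 5° trajectory tolerance and the field names are assumptions, and it is not a surgical planning tool.

    # Hedged example: checks a planned screw construct against the figures
    # quoted in this review. Illustrative only; not a planning tool.
    def check_syndesmotic_screw_plan(height_above_joint_mm,
                                     trajectory_deg,
                                     cortices,
                                     planned_removal_weeks):
        notes = []
        if not 20 <= height_above_joint_mm <= 40:
            notes.append("screw level outside the cited 2-4 cm window")
        # the 5-degree tolerance below is an arbitrary illustrative choice
        if abs(trajectory_deg - 30) > 5:
            notes.append("trajectory deviates from the cited ~30 degrees")
        if cortices not in (3, 4):
            notes.append("fixation is normally through 3 or 4 cortices")
        if not 8 <= planned_removal_weeks <= 12:
            notes.append("removal outside the cited 8-12 week window")
        return notes or ["plan matches the cited figures"]

    print(check_syndesmotic_screw_plan(35, 30, 3, 10))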
Ligament reconstruction for distal tibiofibular syndesmosis injuries commonly uses autologous fibularis longus or brevis tendons. This approach avoids issues such as implant rejection and screw loosening while meeting anatomical and functional restoration requirements. Additionally, artificial ligament materials, such as polyethylene terephthalate (PET), have been utilized due to their high tensile strength and biocompatibility, effectively accelerating functional recovery. Combining absorbable screws, compression screws, and single-side suspension techniques further enhances surgical efficiency and reduces operative time. Biomechanical studies suggest that oblique fixation through a limited incision between the anterior inferior tibiofibular ligament and the interosseous ligament provides optimal outcomes. This technique effectively incorporates the micromotion mechanism of the syndesmosis, significantly improving ankle joint function while reducing the incidence of complications. Connors et al. reported a case of a female patient with an ankle fracture and syndesmosis rupture who underwent allograft semitendinosus tendon transplantation. After two syndesmosis screws were removed at 6 months postoperatively, the patient exhibited no signs of ankle instability or syndesmosis separation, and at over two years of follow-up she showed no notable functional loss. Li et al. conducted a biomechanical study comparing suture-button systems and autologous semitendinosus tendon reconstruction in 8 cadaveric specimens. They measured three-dimensional syndesmosis diastasis, ultimate torque, and rotation angles, finding no statistically significant difference between the two techniques in restoring syndesmosis function. However, the biomechanical results for autologous ligament reconstruction were more promising, warranting further clinical exploration. Recent advancements in limited-incision techniques and novel materials, such as nanoscale artificial ligaments combined with absorbable screws, have expanded the possibilities for syndesmosis reconstruction. Limited-incision approaches not only minimize surgical trauma but also significantly enhance postoperative functional recovery. As more cases are accumulated and long-term follow-up data become available, refined surgical protocols will gradually emerge, offering superior treatment options for complex ankle injuries.
Traditional reconstruction techniques for the deltoid ligament involve anchoring less functional ligaments to the medial malleolus. Common approaches include the Kitaoka, Wiltberger, and Hintermann techniques, which use autologous tendons to reconstruct the tibionavicular ligament, aiming to restrict talar external rotation. Similarly, Deland's technique involves reconstructing the tibiocalcaneal ligament using autologous tendons to prevent talar eversion. Yoo et al. investigated the use of autologous semitendinosus tendons as substitutes for the deltoid ligament, while Persaud reported successful deltoid ligament reconstruction using autologous posterior tibial tendon transplantation. Immediate postoperative imaging revealed restoration of normal anatomical alignment, and 14-month follow-ups demonstrated satisfactory imaging and clinical outcomes. These techniques, however, are performed at the expense of autologous tendons such as the flexor hallucis longus and fibularis longus.
This approach not only involves significant surgical trauma but also impacts the muscle strength at the donor site, potentially leading to long-term adverse effects on ankle joint function and arch stability. To address these limitations, superficial deltoid ligament repair can employ allogeneic semitendinosus tendons. This technique involves weaving the distal end of the tendon with suture anchor threads, offering advantages such as minimal invasiveness, shorter operative time, and reduced functional impact compared to traditional methods. Brodell et al. conducted a retrospective study on deltoid ligament reconstruction, involving 14 patients. Among these, 6 underwent allogeneic semitendinosus tendon transplantation, and 8 received allogeneic fibularis longus tendon grafts. Postoperative weight-bearing X-rays showed that the reconstructed feet achieved normal anatomical alignment. After an average follow-up of 24 months, the mean Foot and Ankle Ability Measure (FAAM) score improved from 69.3 preoperatively to 90.1 postoperatively. Oburu et al. utilized Y-shaped allograft popliteus tendons to reconstruct both the deep and superficial layers of the deltoid ligament. They further reinforced the allograft tendons with fiber bundles or non-absorbable fiber sutures to ensure robust fixation. Recent studies have explored advanced materials and minimally invasive approaches to enhance deltoid ligament reconstruction. Bioengineered ligaments, such as nanofiber-reinforced scaffolds, have shown promise in improving biomechanical strength and reducing immune responses . These materials mimic the native ligament structure and provide an ideal environment for cell proliferation and tissue regeneration. Additionally, 3D-printed scaffolds tailored to patient anatomy have demonstrated the potential to ensure optimal fit and mechanical stability while promoting host tissue integration. Combined techniques, such as arthroscopic-assisted reconstruction with advanced imaging guidance (e.g., intraoperative CT or ultrasound), have improved surgical precision and outcomes . These approaches reduce intraoperative complications and enhance postoperative recovery. Furthermore, recent meta-analyses suggest that allograft-based techniques outperform autografts in terms of reducing donor site morbidity, improving functional scores, and lowering complication rates . Allografts combined with bioresorbable anchors are gaining popularity among surgeons. The development of intraoperative tension-control devices has also significantly advanced deltoid ligament reconstruction. These devices ensure precise replication of physiological forces, which is critical for restoring normal joint mechanics. Future research should focus on optimizing graft selection, evaluating the long-term performance of novel materials, and standardizing reconstruction protocols. Prospective controlled trials and large-scale biomechanical studies will be essential in driving further progress. This integration of innovative materials, advanced technologies, and minimally invasive approaches is transforming deltoid ligament reconstruction into a safer, more effective, and patient-centered procedure. Patients with combined distal tibiofibular and deltoid ligament injuries require timely surgical intervention. Delayed treatment can lead to joint instability, degenerative changes, cartilage damage, and, in severe cases, traumatic arthritis. 
For patients with high expectations for functional recovery, simultaneous repair of both ligaments may be considered. However, the final surgical approach should be based on clinical presentation, imaging findings, and arthroscopic evaluation. For chronic injuries with a duration of less than six months, aggressive repair or reconstruction of the ligaments is recommended . This often involves reconstruction using autologous gracilis tendons, combined with osteotomy to correct abnormal bony structures and restore stability. For injuries lasting more than six months, distal tibiofibular fusion is generally advised to address chronic instability . When addressing chronic deltoid ligament injuries, the surgical approach should take into account the ligament’s shorter and deeper anatomical characteristics and the presence of scar tissue. The preferred method involves fixing the original ligament using suture anchors. If the original ligament is insufficient, tendon grafts (e.g., semitendinosus tendon, flexor hallucis longus tendon, or plantaris tendon) can be used to stabilize the ankle joint.
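The chronicity rule described above can be condensed into a short, purely illustrative sketch. The six-month boundary and the treatment labels are taken from the text; the function itself is a hypothetical simplification, since the actual decision also depends on clinical presentation, imaging, and arthroscopic findings.

    # Hedged example: summarizes the chronicity rule quoted above.
    # Real decisions also weigh clinical, imaging, and arthroscopic findings.
    def chronic_syndesmosis_strategy(months_since_injury):
        if months_since_injury < 6:
            return "repair or reconstruction (e.g., autologous graft, with osteotomy if needed)"
        return "consider distal tibiofibular fusion for chronic instability"

    print(chronic_syndesmosis_strategy(4))
    print(chronic_syndesmosis_strategy(9))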
The future of minimally invasive treatment for combined deltoid ligament and distal tibiofibular syndesmosis injuries is promising, with advancements in surgical techniques and materials shaping new directions. The integration of advanced imaging technologies, such as augmented reality (AR), intraoperative 3D CT, and ultrasound, will enhance surgical precision by providing detailed visualization of ligament and joint structures during minimally invasive procedures. These tools will enable more accurate diagnosis, precise repairs, and reduced intraoperative complications. Next-generation implants, including biocompatible and bioabsorbable suture anchors and fixation devices, are expected to provide optimal strength and stability while minimizing the need for hardware removal. Innovations such as bioengineered nanomaterials and 3D-printed patient-specific implants will further refine treatment options. Additionally, advancements in arthroscopic techniques, incorporating robotic assistance and improved instrumentation, are anticipated to make arthroscopy the gold standard for addressing combined injuries, offering greater precision and faster recovery times. A growing emphasis on biomechanics and functional restoration will guide minimally invasive procedures, focusing on preserving natural joint motion and reducing the risk of post-traumatic arthritis. Biological augmentation, using growth factors, platelet-rich plasma (PRP), and mesenchymal stem cells, will play an increasing role in enhancing ligament healing and tissue integration, accelerating recovery and improving tissue quality. Customized rehabilitation protocols will be tailored to individual patient needs, with early functional rehabilitation and real-time monitoring tools aiming to maximize recovery and minimize complications. The use of large-scale clinical data and artificial intelligence (AI) will support the development of evidence-based surgical protocols, assisting in predicting outcomes and optimizing treatment plans. Finally, comprehensive long-term outcome studies will validate the efficacy and safety of minimally invasive approaches, helping to establish standard practices and improve long-term joint stability and patient satisfaction. The future of minimally invasive treatment for these injuries lies in integrating advanced technology, personalized medicine, and biological innovation to enhance surgical precision, patient outcomes, and overall recovery experiences.
Below is the link to the electronic supplementary material. Supplementary Material 1
Molecular and Microbial Detections of | 5a33bff1-72f6-4a28-9172-d97aaf428b38 | 11636308 | Dentistry[mh] | Introduction Dental caries represents a preventable non‐communicable disease that affects a significant portion of the population across their lifespan (Pitts et al. ). Caries prevention has traditionally depended on fluoride exposure, diet control, oral hygiene status, and consuming antibacterial agents. In 2019, dental caries was estimated to be 43% prevalent worldwide and 46% prevalent in low‐income countries (Spatafora et al. ). Since oral and dental hygiene is directly influenced by the change in the natural microbial flora and its identification is faster and more accessible in the early stages of dental caries, it is considered an ideal option for screening children exposed to dental caries (Sedghi et al. ). Previous studies have often addressed qualitative or quantitative identification of microbes involved in dental caries, and very few research works, especially in Iran, have addressed both qualitative and quantitative aspects of important bacteria in dental caries. To date, no study has been conducted to identify Streptococcus mutans and lactobacilli bacteria in children in Kerman Province; children's demographic and lifestyle factors associated with dental caries are also not fully discussed. Globally, Snyder's test has been examined as a complementary test in some studies along with the evaluation of S. mutans . Therefore, providing rapid and inexpensive dental caries testing to detect at‐risk children in low‐ and middle‐income countries could be a reliable strategy (Nguyen et al. ). A systematic review study recently reported that primary dental caries rates declined in children aged 2–5 years in the United States from 1988 to 2016; however, no obvious declining tendencies were seen in low‐ and middle‐income nations because of a rise in sugar intake (Spatafora et al. ). The global prevalence of early childhood caries is estimated at 49%. The prevalence varies in different countries; for example, in Turkey, Iraq, and Qatar, it is 73.8%, 77.8%, and 89.2%, respectively. However, in Greece and Japan, it is 19.3% and 20.6%, respectively (Maklennan et al. ). In Iran, the prevalence is reported to be 72.8% (Soltani et al. ). A study in Iran stated that the decayed, missing, and filled teeth (dmft) index has approximately 17% increased in Iranian children aged 5–9 years from 1990 to 2017 (Shoaee et al. ). Also, dmft/DMFT score has increased among the Iranian population by more than 15% from 1990 to 2017; particularly, this dmft/DMFT index in Kerman Province has increased considerably from approximately 4.53 in 1990 to over 5.24 in 2017 (Shoaee et al. ; Shoaee et al. ). Additionally, it has been reported that high dental expenditures are the primary obstacle to children's access to oral health care in Kerman province, as well as in some other provinces of Iran (Khoramrooz et al. ; Vali et al. ). Over 500 species of bacteria are involved in the formation of dental plaque; they are a very complex bacterial community that accumulates in the hard tissues of the oral cavity (Rosan and Lamont ). There is no doubt that lactobacilli is one of the most cariogenic bacteria in the oral environment. While it is not the initiator of caries, it plays a crucial role in its progression. Some lactobacilli, such as Lactobacillus gasseeri , L. fermentum , L. vaginalis , and L. 
casei , have been reported to be prevalent at the majority of oral sites, such as saliva, tongue, carious lesions, and dental plaques (Ahirwar, Gupta, and Snehi ; Wen et al. ). Numerous obligate and facultative anaerobic bacteria predominate in the microbial communities linked to dental caries. Mutans streptococci are the major cariogenic pathogens of tooth decay. Mutans streptococci isolated from dental caries samples are S. mutans and S. sobrinus . The acidogenic nature of these bacteria allows them to soften teeth's hard tissues by producing short‐chain acids. In addition, the presence of S. mutans is believed to be one of the main triggers of dental caries (Abranches et al. ; Okada et al. ). Bacteria, fluoride, saliva, and sugar are some of the factors that may alter the dynamic flow of demineralization and remineralization in the enamel. During childhood, pediatricians and families can manage these controllable factors to prevent, slow, or delay the progression of the disease (Krol and Whelan ). Snyder's test is one of the routine tests used to determine susceptibility to dental caries by qualitative estimation of acid production of the microbial community of the oral environment. However, it has some limitations; insufficient specificity in identifying certain groups of organisms involved in caries is the main limitation, as this can lead to false positive results (Kunte et al. ). Therefore, the availability of other tests, such as measuring the levels of the responsible bacteria (e.g. lactobacilli and S. mutans ) can assist in pinpointing the caries more precisely. This study aims to integrate Snyder's test, colony counting, and PCR techniques to provide a more comprehensive assessment of the caries activity of S. mutans and lactobacilli, which has not been thoroughly explored in the pediatric population aged 5–9 in Kerman province, Iran.
Materials and Methods 2.1 Sample Collection This cross‐sectional study was executed in the Department of Pediatric Dentistry, School of Dentistry, Kerman University of Medical Sciences, Kerman, Iran, between March and June 2024. As mentioned, the prevalence of dental caries in the Iranian population is 72.8%. In our study, Formula (1), described previously (Pourhoseingholi, Vahedi, and Rahimzadeh ), was used for the sample size calculation. In this formula, the confidence level is represented by the Z statistic (at α = 10% and a 90% confidence interval [CI], it is 1.645). The P statistic is the expected prevalence (0.728 in our study). The d statistic indicates the precision, or effect size (at a relative precision of 10%, it was 0.0728). (1) N = Z²P(1 − P)/d² According to Formula (1), the minimum total sample size ( N ) for conducting the study on dental caries in the Iranian population was calculated to be approximately 101 (a worked numerical check is given after Section 2.3). The present study involved 120 children (62 [51.67%] boys and 58 [48.33%] girls) aged 5–9 years (6.92 ± 1.52). The sampling was carried out by three dental residents under the supervision of a pediatric dentist. Salivary flow and saliva concentration vary within 24 h; for this reason, saliva was collected from all children in the morning, between 9:00 and 10:00 a.m. or shortly thereafter. Children had not eaten for at least 30 min before sampling. Saliva samples (approximately 2–3 mL) from children who met the eligibility requirements were placed into sterile test tubes with transfer fluid and stored at 5°C–10°C. The tubes were then promptly moved to the laboratory, where they were cultured within a maximum of 3 h. Sampling took a maximum of 1 min; this upper limit was sometimes reached depending on the amount of saliva collected. Three pediatric dentists evaluated the oral and dental status of these children, and the children were then categorized into four groups based on the dmft index: Group 1 (controls) had a dmft of 0, Group 2 had a dmft of 4–6, Group 3 had a dmft of 7–9, and Group 4 had a dmft of 10–13. Using a questionnaire, parents were asked to provide information about their children's daily habits (including the frequency of brushing their teeth, main meals, and sweet snacks) and the parents' educational levels (Table ). The Research Ethics Committee of Kerman University of Medical Sciences approved the study (IR. KMU. REC.1402.449). Furthermore, a written consent form was obtained from the parents of the children participating in the study. 2.1.1 Inclusion Criteria a. Children aged 5–9 years old who were originally from Kerman Province and referred to the Department of Pediatric Dentistry. b. The children had no inflammatory, oral, bacterial, systemic, or other diseases that influence saliva secretion. In addition, they had not taken any antibiotics for at least 14 days before sampling. c. Children's saliva was collected without the use of saliva stimulants. d. Children and parents were willing to participate in the study voluntarily. 2.1.2 Exclusion Criteria a. Children with a dmft index of 1–3 or over 13. The main reason is that there are very few cases with a dmft index over 13 in the pediatric population; also, it is hard to find substantial variations in microbiota at dmft indices of 1–3. b. Individuals who did not meet the age group requirements or were not residents of Kerman Province. c. Children whose parents had not signed the consent form. d. Incomplete responses to the questionnaire.
e. To prevent any potential bias in the colony counting of bacteria, children who had exfoliation of teeth were excluded from the sampling. 2.2 Caries Activity 2.2.1 Snyder's Test B.C.G‐Dextrose Agar (Quelab Company, Montreal, QC, Canada) was used as the Snyder's test medium to qualitatively determine the caries activity of S. mutans , lactobacilli, and some other acidogenic microbes involved in dental caries. The medium contains the following components: peptone (20 g/L), dextrose (glucose, 20 g/L), sodium chloride (5 g/L), bromocresol green (0.02 g/L), and agar (20 g/L). The final pH of the medium at 25°C was 4.8 ± 0.2. Test tubes for Snyder's test were prepared according to the manufacturer's instructions: first, 65.02 g of the medium powder was suspended in 1000 mL of distilled water. The suspension was then heated to boiling to dissolve the medium completely. After that, 10 mL of medium was dispensed into each test tube. Subsequently, the test tubes were sterilized by autoclaving at 121°C for 15 min. Finally, the test tubes were cooled in an upright position. To conduct Snyder's test in the current study, 100 µL of saliva was added to each test tube. The test tubes were then incubated for 24, 48, and 72 h at 37°C. When cariogenic bacteria are present in the saliva, glucose is fermented and lactic acid is produced, accordingly reducing the pH of the medium to approximately 4.4. The severity of caries activity is characterized by the rate at which the color changes from green to yellow. Depending on the situation, one of the four following patterns may occur: (1) complete yellowing within 24 h represents a “marked susceptibility” to developing dental caries; (2) yellowing up to 48 h represents a “definitive susceptibility”; (3) yellowing up to 72 h represents a “limited susceptibility”; and (4) no change in color (green) within 72 h indicates a “negative susceptibility” (Ali et al. ; Ramesh et al. ; Snyder ). 2.2.2 Colony Counting of S. mutans In this study, Mitis Salivarius Agar Base (Quelab Company, Montreal, QC, Canada) was used as a selective medium for the isolation and colony counting of S. mutans . The medium consists of these components: casein enzymic hydrolysate (15 g/L), peptic digest of animal tissue (5 g/L), dextrose (1 g/L), sucrose (50 g/L), dipotassium phosphate (4 g/L), trypan blue (0.075 g/L), crystal violet (0.0008 g/L), and agar (15 g/L). The medium's final pH was 7 ± 0.2 at 25°C. The manufacturer's instructions were followed to prepare the medium: 90.07 g of the medium powder was suspended in 1000 mL of distilled water. Complete dissolution of the medium was obtained by heating it to boiling. Sterilization of the medium was achieved by autoclaving at 121°C for 15 min. Afterward, the medium was cooled to 50°C–55°C, and then 1 mL of sterile 1% potassium tellurite solution was added (after this step, the medium should not be reheated). Ultimately, the medium was mixed well and poured into Petri plates. For the colony counting of S. mutans , 0.2 mL of saliva was added to 1.8 mL of distilled water in a 2 mL tube (dilution of 10⁻¹), then 0.2 mL of the sample from this tube was added to 1.8 mL of distilled water in a new 2 mL tube (dilution of 10⁻²). The process was continued until the serial dilution reached 10⁻⁵. From this dilution (10⁻⁵), 1 mL of sample was plated onto the Petri plate. After 48 h of incubation at 35°C, colonies of S. mutans appeared on the medium and were counted with the naked eye.
The count of S. mutans was estimated by multiplying the number of colony‐forming units (CFU) by the dilution factor; the output was expressed as CFU/mL of saliva (Ademe, Admassu, and Balakrishnan ). As the saliva had been serially diluted to 10⁻⁵, the S. mutans counts were reported in units of 10⁵ CFU/mL (a short sketch of this calculation is given after Section 2.2.4). 2.2.3 Colony Counting of Lactobacilli To isolate and count colonies of lactobacilli, de Man, Rogosa, and Sharpe (MRS) Agar (Quelab Company, Montreal, QC, Canada) was utilized (de Man, Rogosa, and Sharpe ). The composition of the medium was proteose peptone (10 g/L), beef extract (8 g/L), yeast extract (4 g/L), dextrose (20 g/L), polysorbate 80 (1 g/L), ammonium citrate (2 g/L), sodium acetate (5 g/L), magnesium sulfate (0.2 g/L), manganese sulfate (0.05 g/L), dipotassium phosphate (2 g/L), and agar (14 g/L); the pH of the medium at 25°C was 6.2 ± 0.2. To prepare the culture medium, the manufacturer's instructions were followed: 64 g of medium powder was suspended in 1000 mL of distilled water and heated to dissolve the medium completely. The medium was then distributed into Petri plates and sterilized by autoclaving at 121°C for 15 min. For the isolation and colony counting of lactobacilli, the serial dilution of saliva (10⁻⁵) described above for S. mutans was used; 1 mL of this dilution was plated onto the MRS agar medium. Plates were incubated anaerobically using Microbiology Anaerocult C (Merck Company, United States) in anaerobic jars for 48 h at 37°C. After that, colonies of lactobacilli appeared on the plates and were counted with the naked eye. The calculation strategy described for S. mutans was also applied to lactobacilli (Ademe, Admassu, and Balakrishnan ); hence, lactobacilli counts were likewise reported in units of 10⁵ CFU/mL. 2.2.4 Molecular Identification DNA was extracted from children's saliva using a standard protocol (Goode et al. ). Primers used for detection of the gtfB gene in S. mutans were “F5ʹ‐ACTACACTTTCGGGTGGCTTGG‐3ʹ” and “R5ʹ‐CAGTATAAGCGCCAGTTTCATC‐3ʹ” (Franco e Franco et al. ). Furthermore, the 16S rRNA gene was used for the detection of lactobacilli with these primers: “F5ʹ‐CATTTGGAAACAGATGCTAATACC‐3ʹ” and “R5ʹ‐GTCCATTGTGGAAGATTCCC‐3ʹ” (Pahumunto et al. ). The presence of S. mutans and lactobacilli was then evaluated by PCR as follows. Each PCR tube was prepared with these components: 10 µL of Taq DNA Polymerase 2x Master Mix RED (Ampliqon Co., Denmark), 150 ng of extracted DNA, and 10 pmol/µL of each of the forward and reverse primers, and the final reaction volume was brought to 20 µL with distilled water. For the negative control, distilled water was used instead of DNA, and for the positive control, purified genomic DNA of S. mutans or lactobacilli was used instead of salivary DNA. The steps of the PCR program were as follows: (I) 95°C for 5 min; (II) 40 (for 16S rRNA )/45 (for gtfB ) cycles of 95°C for 30 s, 54°C for 45 s, and 72°C for 45 s; and (III) a final extension at 72°C for 5 min. Ultimately, PCR products were separated by electrophoresis on a 2% agarose gel together with a DNA ladder, the negative control, and the positive control.
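To make the colony-count arithmetic of Sections 2.2.2 and 2.2.3 concrete, the following minimal Python sketch computes CFU/mL from a plate count. It is illustrative only: the function name and the example plate of 52 colonies are hypothetical and not values from this study.

```python
def cfu_per_ml(colonies: int, dilution_exponent: int = 5, plated_volume_ml: float = 1.0) -> float:
    """Estimate viable bacteria in the original saliva sample.

    CFU/mL = colony count x dilution factor / volume plated (mL).
    The protocol above plates 1 mL of the 10^-5 serial dilution, so the
    estimate reduces to colonies x 10^5.
    """
    dilution_factor = 10 ** dilution_exponent
    return colonies * dilution_factor / plated_volume_ml


# Hypothetical plate with 52 colonies (illustrative only, not a measured value)
estimate = cfu_per_ml(52)
print(estimate)        # 5200000.0 CFU/mL in the undiluted saliva
print(estimate / 1e5)  # 52.0 -- the figure as reported in units of 10^5 CFU/mL
```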
2.3 Statistical Analysis All statistical tests were performed using SPSS version 27.0 (SPSS Inc., Chicago, USA), MedCalc version 22.021 (MedCalc Software Ltd., Ostend, Belgium; for calculating odds ratios), and GraphPad Prism version 10.2.3 (GraphPad Software Inc., Boston, MA, USA; for generating the bar charts and violin charts in Figure ), with a p value < 0.05 threshold. The following statistical tests were used in the study. The Kolmogorov–Smirnov test, which is a robust normality test for sample sizes over 50 (Mishra et al. ), was performed to evaluate the normal distribution of numerical data, including children's age, S. mutans count, and lactobacilli count; the outputs of this test were considered when choosing parametric or nonparametric tests. To determine differences between a numerical variable and a dichotomous variable, the nonparametric Mann–Whitney U test was utilized; the effect size for this test was calculated using Formula (2). (2) Cohen's d = |Z score|/√N The nonparametric Kruskal–Wallis test was used to find differences between a numerical variable and a multi‐state (three or more categories) variable. The chi‐square test was employed to compare two nominal/ordinal variables. The nonparametric Spearman's correlation coefficient test was used to determine possible correlations among numerical variables. Scheffe's post hoc test was utilized for evaluating differences in bacterial counts among dmft groups.
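As a numerical check of Formulas (1) and (2), the short Python sketch below reproduces the sample-size estimate and one of the reported effect sizes. It is written only for illustration and was not part of the original analysis pipeline, which used SPSS, MedCalc, and GraphPad Prism.

```python
import math


def sample_size(z: float, p: float, d: float) -> float:
    """Formula (1): N = Z^2 * P * (1 - P) / d^2 for estimating a single proportion."""
    return (z ** 2) * p * (1 - p) / (d ** 2)


def effect_size_from_z(z_score: float, n: int) -> float:
    """Formula (2): effect size reported for the Mann-Whitney U test, d = |Z| / sqrt(N)."""
    return abs(z_score) / math.sqrt(n)


# Section 2.1 values: Z = 1.645 (90% CI), P = 0.728 (expected prevalence), d = 0.0728 (10% relative precision)
print(round(sample_size(1.645, 0.728, 0.0728)))   # 101 -> the minimum sample size reported above

# Section 3.2 values for lactobacilli, dmft Group 1 vs Group 2: Z = -3.498, total N = 59
print(round(effect_size_from_z(-3.498, 59), 3))   # 0.455 -> matches the reported Cohen's d
```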
Results 3.1 The Status of Normal Distribution In the present study, quantitative data (e.g., children's age and counts of S. mutans and lactobacilli) were checked using the Kolmogorov–Smirnov test to assess their normality. Since the outputs revealed that these variables were not normally distributed (age: N = 120, mean = 6.93, SD = 1.52, p < 0.001; S. mutans count: N = 120, mean = 52.52, SD = 44.33, p < 0.001; and lactobacilli count: N = 120, mean = 73.78, SD = 47.95, p < 0.001), non‐parametric tests should be applied for subsequent statistical analyses. Since the number of bacteria in a population varies from person to person, it is not out of the question that the population will not follow a normal distribution. 3.2 Assessment of the Roles of Bacterial Counts and the Age of the Children in Dental Caries The Mann–Whitney U test was used to evaluate significant differences between the bacterial count and the age of the children with bacterial PCR and the gender of the children (Table ). In addition, the association between children's age and bacterial counts with children's habits and the educational level of their parents was measured using the Kruskal–Wallis test (Table ). In Figure , the counts of lactobacilli (Figure ) and S. mutans (Figure ) in dmft Group 1 are compared with dmft Groups 2, 3, and 4 using the Mann–Whitney U test, and the bar chart–based median of counts is created. This test showed the lactobacilli levels were significantly different in dmft Group 1 with Group 2 (Total N = 59, Mean Rank; Group 1 = 22.32, Group 2 = 37.95; U = 204.5, Z = −3.498, Cohen's d = 0.455, p = 0.0003), Group 1 with Group 3 (Total N = 71, Mean Rank; Group 1 = 24.3, Group 3 = 44.56; U = 264, Z = −4.087, Cohen's d = 0.485, p < 0.0001), and Group 1 with Group 4 (Total N = 50, Mean Rank; Group 1 = 19.1, Group 4 = 35.1; U = 108, Z = −3.804, Cohen's d = 0.538, p < 0.0001). The S. mutans levels were also notably different in dmft Group 1 with Group 2 (Total N = 59, Mean Rank; Group 1 = 21.93, Group 2 = 38.34; U = 193, Z = −3.671, Cohen's d = 0.478, p = 0.0002), Group 1 with Group 3 (Total N = 71, Mean Rank; Group 1 = 22.42, Group 3 = 45.94; U = 207.5, Z = −4.745, Cohen's d = 0.563, p < 0.0001), and Group 1 with Group 4 (Total N = 50, Mean Rank; Group 1 = 19.5, Group 4 = 34.5; U = 120, Z = −3.566, Cohen's d = 0.504, p = 0.0002). The distribution status of lactobacilli (Figure ) and S. mutans counts (Figure ) in all dmft Groups is also shown in the violin plots. In addition, the counts of lactobacilli (Figure ) and S. mutans (Figure ) in the results of Snyder's test are also specified in the violin plots. The distributions of lactobacilli and S. mutans counts in children's gender are depicted in Figure . 3.3 Findings of PCR and Snyder's Test In Table , the roles of bacterial PCR, Snyder's test, and dmft groups in children's habitual variables and parents' educational level were determined using the chi‐square test. The PCR products of lactobacilli and S. mutans were run on 2% agarose gel electrophoresis (Figure ). The number of children with positive PCR results of lactobacilli was 104/120 (86.67%), and the lactobacilli count was 80.11 ± 47.37 (10 5 CFU/mL) in them. In children with negative PCR results of lactobacilli, these were 16/120 (13.33%) and 32.63 ± 25.17 (10 5 CFU/mL), respectively. In addition, the positive PCR results of S. mutans in the affected children were 59/120 (49.17%) and the S. 
mutans count was 69.12 ± 34.74 (10⁵ CFU/mL); for negative PCR results of the bacterium, these were 61/120 (50.83%) and 36.46 ± 46.87 (10⁵ CFU/mL), respectively. Also, children who had a positive PCR for lactobacilli and marked susceptibility (the highest grade of acid production determined by Snyder's test) numbered 54/120 (45%), and their lactobacilli count was 93.15 ± 44.76 (10⁵ CFU/mL). Moreover, children with a positive PCR for S. mutans and marked susceptibility numbered 45/120 (37.5%), and their S. mutans count was 69.15 ± 29.7 (10⁵ CFU/mL). Also, 44/120 (36.67%) children had positive PCRs of lactobacilli and marked susceptibility to acid production; they had 94.66 ± 44.30 (10⁵ CFU/mL) for the lactobacilli count and 69.18 ± 30.05 (10⁵ CFU/mL) for the S. mutans count. The frequency of Snyder's test results and lactobacilli and S. mutans PCRs in the dmft Groups is shown in Table . The frequency of lactobacilli and S. mutans counts in the dmft Groups is presented in Table . The frequency of lactobacilli and S. mutans counts across the results of Snyder's test is provided in Table . The frequency of lactobacilli and S. mutans PCRs across the results of Snyder's test is demonstrated in Table . The frequencies of children's daily habits and parents' educational level in relation to the PCRs and counts of lactobacilli and S. mutans are supplied in Table . The frequencies of children's daily habits and parents' educational level in relation to the results of Snyder's test are provided in Table . Specifically, in Table , the relationship of each dmft Group with the other dmft groups is examined using Scheffe's post hoc test. Based on the outputs of Scheffe's test, there are significant differences between Group 1 and Groups 2–4 for both S. mutans and lactobacilli counts. However, there was no significant relationship between these bacterial counts within dmft Groups 2–4. The relationships among lactobacilli count, S. mutans count, and age within dmft groups and in total, assessed using Spearman's correlation coefficient test, are provided in Table . The findings of Table revealed that lactobacilli count, S. mutans count, and age had significant positive correlations with one another. In dmft Group 1, the correlation between lactobacilli count and S. mutans count was positive ( n = 30, r = 0.848, p < 0.001); in dmft Groups 2–4 it was also positive ( n = 90, r = 0.514, p < 0.001); moreover, in the total participants (dmft Groups 1–4), it was positive ( N = 120, r = 0.662, p < 0.001). One of the key results is that increasing age was a contributing factor to increasing lactobacilli ( N = 120, r = 0.389, p < 0.001) and S. mutans counts ( N = 120, r = 0.352, p < 0.001) in dental caries. As mentioned earlier in Table , significant differences were found between the counts of lactobacilli ( N = 120, df = 3, H = 22.436, p < 0.001) and S. mutans ( N = 120, df = 3, H = 25.998, p < 0.001) across the dmft Groups. Additionally, to evaluate the outcomes of bacterial PCR within the dmft Groups, odds ratios (OR) were estimated (Table ). As shown in Table , the odds of a positive lactobacilli PCR were significantly higher in dmft Groups 2 (OR = 7.816, p = 0.013), 3 (OR = 7.333, p = 0.005), and 4 (OR = 11, p = 0.028) than in dmft Group 1. However, the OR of S. mutans PCR was only significantly higher in dmft Group 3 compared to dmft Group 1 (OR = 2.699, p = 0.045).
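For readers unfamiliar with how the odds ratios in Table were derived, the sketch below shows the standard 2×2 calculation with an approximate Wald 95% confidence interval. The counts used are purely hypothetical and are not the study data; the study itself computed odds ratios with MedCalc, whose exact procedure may differ in detail.

```python
import math


def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Standard 2x2 odds ratio.

    a = PCR-positive children in a caries (dmft > 0) group
    b = PCR-negative children in that caries group
    c = PCR-positive children in the caries-free group (dmft Group 1)
    d = PCR-negative children in the caries-free group
    """
    return (a * d) / (b * c)


def wald_ci_95(a: int, b: int, c: int, d: int) -> tuple:
    """Approximate 95% confidence interval for the odds ratio on the log scale."""
    log_or = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))


# Hypothetical counts (not study data): 20/10 positive/negative in a caries group vs 10/20 in controls
print(odds_ratio(20, 10, 10, 20))   # 4.0
print(wald_ci_95(20, 10, 10, 20))   # roughly (1.4, 11.7)
```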
Discussion Our study aimed to investigate the findings of Snyder's test and the levels and PCRs of S. mutans and lactobacilli in children with (dmft Groups 2–4) and without (dmft Group 1) dental caries. Neither age nor the counts of S. mutans and lactobacilli differed significantly by gender (Table ). Likewise, age did not differ by lactobacilli PCR result but did differ by S. mutans PCR result. However, the S. mutans and lactobacilli counts differed significantly according to their PCR results. A study by Lee et al. determined the caries activity of S. mutans using PCR and found that the proportion of S. mutans identified in plaque samples was 56.8% and in saliva samples was 79.7%. They also reported that among participants (children, adolescents, and adults), adolescents had the highest levels of S. mutans in saliva, and adults had the highest levels in plaque samples (Lee et al. ). Table shows that no significant differences were found between the counts of lactobacilli and S. mutans with respect to brushing teeth, sweet snacks, and main meals. However, the counts were notably associated with mothers' and fathers' educational levels, dmft groups, and Snyder's test. In addition, age was related to sweet snacks and fathers' educational levels, but it was not related to the other variables. A study on a group of Indian schoolchildren found that the S. mutans count, frequency of food intake, and food content had notable roles in the risk of dental caries (Jagan et al. ). According to Table , lactobacilli PCR differed significantly with gender, mothers' and fathers' educational levels, dmft Groups, and S. mutans PCR, but showed no differences with brushing teeth, sweet snacks, main meals, and Snyder's test. S. mutans PCR differed significantly with sweet snacks, lactobacilli PCR, and Snyder's test. Surprisingly, Snyder's test differed significantly only with S. mutans PCR. The dmft Groups differed significantly with brushing teeth, sweet snacks, mothers' and fathers' educational levels, and lactobacilli PCR. A study conducted by Sajadi et al. on Iranian children aged 3–6 years in Kerman, Iran, showed that the dmft index differed significantly with the mother's educational level, eating sweets and biscuits, toothbrush use, and children's age. However, they did not find any difference between the dmft index and gender (Sajadi et al. ). The results of Sajadi's study are very similar to ours, but they did not assess the impact of the number of main meals that children eat per day; in our study, main meals did not differ between the dmft groups (Table ). One study reported that low levels of S. mutans in individuals with high levels of dental caries could be a result of having special strains of S. mutans (Toi, Cleaton‐Jones, and Daya ). In our study, we did not isolate and differentiate these strains of S. mutans . Host–microbiota–diet interactions may be manageable to reduce the risk of dental caries. Prevention strategies should target children's behavioral and dietary habits to reduce the risk of oral and dental problems (Anil and Anand ). Our study found a positive association between increasing children's age and increasing numbers of lactobacilli and S. mutans (Table ). Hence, controlling host–microbiota–diet interactions should be a primary strategy, at least in younger children. Some studies reported that significant correlations exist between oral lactobacilli counts and the severity of dental caries (Ademe, Admassu, and Balakrishnan ; Piwat et al. ).
However, a study by Eşian et al. in Romania did not find a correlation between lactobacilli counts and dental caries, but they found a correlation between S. mutans counts and dental caries (Eşian et al. ). Because of S. mutans ' capacity to form biofilm by synthesizing glucan, ability to produce acid, and acid tolerance, a high concentration of S. mutans has been correlated to dental caries (Gao et al. ). Probiotic lactobacilli have been shown to reduce and/or inhibit the caries activity of S. mutans in dental caries which could be used as a suitable therapy, especially for at‐risk children (Wen et al. ). Our study found significant positive correlations between S. mutans count and lactobacilli count in dmft Groups. In contrast with our study, Sounah et al. used real‐time PCR to assess the microbial community involved in dental caries in adult Yemeni people, surprisingly they did not find any relation between S. mutans and lactobacilli levels with DMFT index (Sounah and Madfa ). Also, a study in Iran by Najafi et al. showed that there is no correlation between lactobacilli counts and S. mutans count in patients with dental caries (Najafi et al. ). A systematic review study reported that while lactobacilli are not inherently efficient at adhering to tooth surfaces compared to their cariogenic collaborator, mutans streptococci; therefore, their colonization potential is markedly improved in the presence of initial colonizers like S. mutans ; hence, lactobacilli are associated with advanced dental caries regardless of age (Wen et al. ). Our study also had a similar conclusion because the average of the lactobacilli count in dmft Group 1 is about 97.01% lower than in dmft Group 2, 100.02% in dmft Group 3, and 100.09% in dmft Group 4. The average of the S. mutans count in dmft Group 1 was 129.09% lower than in dmft Group 2, 175.05% in dmft Group 3, and 159.84% in dmft Group 4 (Table ). A randomized clinical trial on children 3–9 years old in Kerman, Iran, showed that Biodentine and formocresol pulpotomy techniques may be suitable as a good treatment for children suffering from primary molars (Gisour et al. ). One study identified 18 phylotypes of lactobacilli in dental caries by 16S rRNA sequencing and phylogenetic analysis; they also measured the concentration of lactobacilli using real‐time PCR and observed that it was about 34 times higher than when measuring CFU (Byun et al. ). A new strategy for reducing the levels of S. mutans and lactobacilli is the use of kidodent and probiotic mouth rinse (Bolla et al. ). It should be noted that some limitations were encountered during our study, including the refusal of some parents to complete the questionnaire and the refusal of some of them to allow their children to undergo dental examinations. As an additional limitation, this study was executed exclusively in Kerman Province, so the results may not be generalized to other Iranian provinces due to their localized nature. The authors recommend that further studies with a large sample size should be commissioned. Nowadays, even though the incidence of dental caries in children has decreased globally compared to past decades, there is still a need for public dental screening policies to prevent dental caries in children.
Conclusion

There were strong correlations between the levels of S. mutans and lactobacilli, which can accelerate the dental caries process in children, and these microbial levels increased with children's age. Furthermore, positive PCR results for both species were associated with more severe tooth decay. Our study proposes that combining Snyder's test with PCR and colony counting of S. mutans and lactobacilli could serve as a cost-effective tool for early caries detection in clinical settings. Understanding the precise roles played by these bacteria in childhood dental caries will require further research. Our results could be useful to microbiologists, molecular pathologists, healthcare professionals, pediatric dentists, and others dealing with oral and dental problems.
Project administration: Marzieh Danaei and Raziyeh Shojaeipour. Supervision: Marzieh Danaei, Raziyeh Shojaeipour, and Hamidreza Poureslami. Conceptualization: Marzieh Danaei, Raziyeh Shojaeipour, Hamidreza Poureslami, Fatemeh Sadat Sajadi, Elham Farokh Gisour, Fatemeh Jahanimoghadam, and Milad Mollaali. Funding acquisition: Marzieh Danaei. Resources: Marzieh Danaei. Writing manuscript draft: Milad Mollaali. Critical reviewing and editing of final manuscript: Raziyeh Shojaeipour, Fatemeh Sadat Sajadi, Marzieh Danaei, Hamidreza Poureslami, Fatemeh Jahanimoghadam, Elham Farokh Gisour, and Milad Mollaali. Investigations and sample collection: Marzieh Danaei, Vida Fakharmohialdini, Aida Gholampour, Mehrnaz Foroudisefat, Arezoo Mirshekari, Milad Mollaali, Hamidreza Poureslami, Elham Farokh Gisour, Fatemeh Jahanimoghadam, and Raziyeh Shojaeipour. Methodology: Marzieh Danaei, Raziyeh Shojaeipour, Hamidreza Poureslami, Fatemeh Sadat Sajadi, Elham Farokh Gisour, Fatemeh Jahanimoghadam, Milad Mollaali, Vida Fakharmohialdini, Aida Gholampour, Mehrnaz Foroudisefat, and Arezoo Mirshekari. Formal analysis: Milad Mollaali and Marzieh Danaei. Software: Milad Mollaali. Data curation: Milad Mollaali, Raziyeh Shojaeipour, and Marzieh Danaei. Visualization: Milad Mollaali. Validation: Hamidreza Poureslami, Fatemeh Sadat Sajadi, Raziyeh Shojaeipour, Marzieh Danaei, Elham Farokh Gisour, Fatemeh Jahanimoghadam, and Milad Mollaali. Approving manuscript contents: all authors.
The Research Ethics Committee of Kerman University of Medical Sciences approved the study (IR.KMU.REC.1402.449).
A written consent form was obtained from the parents of the children participating in the study.
The authors declare no conflicts of interest.
Comparative Analysis of Metabolites of Wild and Cultivated

Notopterygium incisum Ting ex H. T. Chang (NI) is a medicinal plant of the genus Notopterygium in the Umbelliferae family. This plant is mainly distributed in the provinces of Sichuan, Gansu, and Qinghai in China. The Aba Tibetan and Qiang Autonomous Prefecture areas in Sichuan Province are considered to be the origin of genuine NI production. The dried rhizome of NI is used clinically for the treatment of rheumatism and paralysis in traditional Chinese medicine. Pharmacological studies have shown that NI extracts exhibit anti-inflammatory, antioxidant, antibacterial, analgesic, and anticancer activities, as well as anti-osteoporosis and neuroprotective properties. Extracts of NI have also been used to treat Alzheimer's disease. Because of the scarcity of wild NI resources, NI was ranked as a Grade III protected plant by the State Council of the People's Republic of China in 1987. Consequently, artificially cultivated varieties have been developed, and cultivated NI is currently the main source of this medicine on the market.

Phytochemical studies have shown that the chemical constituents of NI include volatile oils, phenolic acids, coumarins, polyene–alkynes, and small amounts of flavonoids. These secondary metabolites are unique substances produced by plants in response to environmental stress during growth; they allow plants to adapt to and survive in their environment and are also the basis for the pharmacological effects of plant-derived drugs. Pharmacological studies have shown that phenolic acids and coumarins exhibit anti-inflammatory, analgesic, and antioxidant activity, and studies on polyene–alkynes have demonstrated their anti-cancer properties and their ability to reduce neuroinflammation.

Although NI is a medicinal plant, it is still unknown whether the growth environment and cultivation practices affect the chemical composition of its secondary metabolites, and whether changes in its chemical composition affect its therapeutic effects is also unknown. To date, few studies have compared the chemical composition and pharmacological activities between wild and cultivated NI or among cultivated NI from different growing areas. In this study, we analyzed the dried rhizomes of nine batches of wild and cultivated NI from Sichuan, Gansu, and Qinghai provinces. These samples were subjected to phytometabolomic analyses using gas chromatography–mass spectrometry (GC–MS) and ultrahigh performance liquid chromatography (UHPLC)-Orbitrap MS. This allowed us to compare metabolite profiles among wild resources, cultivars from genuine production areas, and cultivars from other origins. The biosynthetic pathway of phenolic acids and coumarins was summarized, and seven key intermediates contributing to differences in metabolic profiles among the samples were screened out. The possible mechanisms leading to differences in chemical profiles among the three types of NI resources were analyzed. Finally, the anti-inflammatory effects of wild NI and cultivated NI were compared using a zebrafish yolk sac inflammation model.

2.1. Morphological Differences Among NI from Different Areas

As shown in , the morphology of rhizome slices of wild NI from Gansu, Qinghai, and Sichuan was similar. The rhizomes exhibited a brown surface, with punctate or verrucose protruding root scars at the nodes.
The rhizome tissue displayed a radial arrangement, with deep fissures. The cortex was brown, the xylem was yellowish-white, and the pith was yellowish-brown. Cultivated NI exhibited a brown surface, a yellow-brown cortex, a yellow-white xylem, and many fibrous roots. The rhizome slices (“drinking tablets”) of the cultivated NI were generally larger than those of the wild NI. The rhizome slices of NI cultivated in Gansu and Qinghai were similar in their morphology, with larger fissures in the cross-section than those of NI cultivated in Sichuan. Their pith was yellow to light brown, whereas that of NI cultivated in Sichuan was brown. 2.2. Chemical Composition of Volatile Oil of NI as Determined by GC–MS The volatile components of the NI samples were analyzed by GC–MS. A shows the total ion chromatogram (TIC) of wild NI from Sichuan (SW). Comparison of the data with those in the NIST database revealed a total of 81 chemical constituents (see ), including 33 monoterpenes, 33 sesquiterpenes, and 15 other components, with a total detection rate of ≥89.5%. The main volatile oil constituents included γ-terpinene (C13), (−)-4-terpineol (C29), bornyl acetate (C41), p -cymene (C10), -4-carene (C9), and α-pinene (C3). Pharmacological studies have shown that γ-terpinene and α-pinene exhibit anti-inflammatory, antioxidant, neuroprotective, and analgesic effects ; p -cymene and -4-carene have antibacterial, insecticidal, and antiviral effects ; (−)-4-terpineol shows anticancer effects ; and bornyl acetate displays anti-inflammatory and immunomodulatory effects . A principal component analysis (PCA) was conducted for the batches of NI from various origins, using the relative abundance of chemical components as the variable ( B). The samples showed a clear clustering pattern in the PCA plot. The nine batches of samples were clustered into three categories: one category consisted of three batches of wild NI (from Sichuan, Gansu, and Qinghai), another consisted of four batches of cultivated NI from Qinghai and Gansu provinces, and the final category consisted of two batches of cultivated NI from Sichuan Province. A cluster heat map was constructed to show the distribution of volatile components and their abundance in different batches of NI. As shown in C, the distribution of volatile components was similar in the three wild NI samples from Sichuan, Gansu, and Qinghai. The distribution of volatile components was similar in cultivated NI from Qinghai and Gansu, but their volatile profiles differed from that of cultivated NI from Sichuan. The volatile profiles were similar in the two batches of cultivated NI from each place of origin. Thus, there was a high degree of inter-batch similarity in the volatile component composition of cultivated NI from the same planting area. To identify the differentially accumulated metabolites (DAMs) between wild and cultivated NI, orthogonal partial least squares discriminant analysis (OPLS-DA) was performed to separate wild and cultivated NI from different origins on the basis of the relative contents of volatile components. The parameters to screen for DAMs between the wild and cultivated NI materials were a variable importance in projection (VIP) value of >1, Log 2 fold change (FC) > 0 or <0, and p < 0.05. The OPLS-DA score plot and the permutation test results are shown in D. The contents of 21 volatile components were higher in wild NI than in cultivated NI (11 monoterpenes, nine sesquiterpenes and one other compound). 
The contents of seven compounds were lower in wild NI than in cultivated NI (one monoterpene, four sesquiterpenes, and two other compounds). The same procedure was used to identify DAMs between cultivated NI from Sichuan and cultivated NI from Gansu and Qinghai. The OPLS-DA score plot and permutation test results are shown in E. The contents of 16 volatile compounds were higher in cultivated NI from Sichuan than in cultivated NI from Gansu and Qinghai (seven monoterpenes, three sesquiterpenes, and six other components). The contents of 22 compounds were lower in cultivated NI from Sichuan than in cultivated NI from Gansu and Qinghai (five monoterpenes, 14 sesquiterpenes, and three other compounds). Volcano plots were constructed to visualize the differential composition of volatile compounds in wild NI vs. cultivated NI and in cultivated NI from Sichuan vs. cultivated NI from Gansu and Qinghai ( F,G). As shown in F, the contents of α-phellandrene (C8), -4-carene (C9), α-terpineol (C31), γ-muurolene (C58), copaene (C47), and dehydroxy-isocalamendiol (C74) were higher in wild NI than in cultivated NI, whereas the contents of cis-thujopsene (C53), guaiac alcohol (C72), β-chamigrene (C59), caryophyllene (C51), and E-7-tetradecenol (C46) were higher in cultivated NI than in wild NI. Pharmacological studies have shown that α-phellandrene and copaene have anti-inflammatory and analgesic effects; α-phellandrene also exhibits antioxidant and wound-healing-promoting effects; and caryophyllene displays anti-inflammatory and antioxidant effects. These results show that, compared with cultivated NI, wild NI contained more volatile components with antioxidant and anti-inflammatory pharmacological activities.

Next, we compared the volatile profiles of cultivated NI from Sichuan and cultivated NI from Gansu and Qinghai. The contents of α-terpinolene (C14), -4-carene (C9), cubenene (C52), octanal (C7), and nonanal (C17) were higher in cultivated NI from Sichuan than in cultivated NI from Gansu and Qinghai, whereas the contents of α-bisabolol (C81), (−)-aristolene (C77), apiol (C73), and guaiol (C78) were higher in cultivated NI from Gansu and Qinghai than in cultivated NI from Sichuan. Octanal and nonanal have bacteriostatic effects; -4-carene, α-bisabolol, and guaiol show insecticidal and bacteriostatic effects; and α-bisabolol also exhibits anticancer and anti-inflammatory pharmacological effects. In summary, NI cultivated in Sichuan, Gansu, and Qinghai was rich in volatile compounds with a range of antibacterial and insecticidal effects.

2.3. Non-Volatile Component Profiles of NI Samples as Determined by UHPLC-Orbitrap MS Analysis

The chemical components of the 95% ethanol extract of the nine batches of samples were investigated by UHPLC-Orbitrap MS. shows the TICs of the QC sample in both positive and negative ion modes. A total of 114 compounds, including secondary metabolites and endogenous substances, were identified (see ), consisting of 51 coumarins, 19 phenolic acids and their derivatives, two flavonoids, three polyene–alkynes, 14 amino acids, four nucleosides, six carbohydrates, seven fatty acids, five amides, and three other compounds. The chemical structures of these compounds are shown in .
As shown in , the intensities of the peaks corresponding to coumarin constituents such as nodakenin (C62), nodakenitin (C76), angelicin (C77), imperatorin (C95), notopterol (C97), phellopterin (C99), and isoimperatorin (C101), as well as the intensity of the peak corresponding to falcarindiol (C102), were significantly higher than those of the other constituents. The identified chemical components were classified into seven categories: coumarins, phenolic acids, polyene–alkynes, amino acids, carbohydrates, fatty acids, and other constituents (including flavonoids, nucleosides, amides, and others). Based on the sum of the peak areas of each category of constituents, sector charts were constructed to visualize the distribution of these categories in the different samples ( A). As shown in the figure, coumarins were the dominant secondary metabolites in NI, followed by phenolic acids. The mass percentage of phenolic acids was higher in wild NI than in cultivated NI from Gansu and Qinghai, but lower in wild NI from Sichuan than in cultivated NI from Sichuan. Polyene–alkynes exhibit anti-cancer properties and reduce neuroinflammation. The mass percentage of polyacetylenes was higher in cultivated NI from Sichuan than in wild NI from Sichuan. The mass percentages of amino acids and carbohydrates were higher in cultivated NI than in wild NI. The mass percentage of coumarins was significantly lower in cultivated NI from Sichuan than in wild NI from Sichuan.

Using the peak areas of the identified components as the variable, PCA clustering analysis was performed. As shown in the PCA plot ( B), the QC samples clustered together, indicating good instrument precision and reliable mass spectrometry data. In the plot, wild NI from Sichuan, Gansu, and Qinghai grouped together; NI cultivated in Qinghai and Gansu clustered together; and NI cultivated in Sichuan formed a separate category. We further investigated the distribution of secondary metabolites in the different batches of NI samples using heat maps ( C). Wild NI from the three different growing areas exhibited similar color block distributions in the heat map. The color distribution was similar for NI cultivated in Gansu and Qinghai, while that of NI cultivated in Sichuan was markedly different from the other samples.

To clarify the DAMs between wild and cultivated NI, as well as between NI cultivated in Sichuan and NI cultivated in Gansu and Qinghai, OPLS-DA was first performed using the peak areas of the non-volatile components in wild and cultivated NI as the variable. The OPLS-DA score diagram and permutation test results are shown in D. Using VIP > 1, Log2FC > 0 or <0, and p < 0.05 as screening criteria, the DAMs between wild and cultivated NI were screened. The contents of 35 components were higher in wild NI than in cultivated NI, including 20 coumarins (eight simple coumarins, 11 linear furanocoumarins, and one angular furanocoumarin), three polyene–alkynes, two carbohydrates, six phenolic acids and their derivatives, two fatty acids, one amide, and one other component. The contents of 20 components were lower in wild NI than in cultivated NI, including three coumarins (two simple coumarins, one linear furanocoumarin), seven amino acids, one nucleoside, two carbohydrate components, two phenolic acids and their derivatives, two fatty acids, two amides, and one other chemical component. Using the same method, we identified 64 DAMs between NI cultivated in Sichuan and NI cultivated in Qinghai and Gansu.
The OPLS-DA score plot and permutation test results are shown in E. The contents of 19 compounds were higher in NI cultivated in Sichuan than in NI cultivated in Qinghai and Gansu, including seven coumarins (four simple coumarins, two linear furanocoumarins, and one angular furanocoumarin), three amino acids, one nucleoside, one flavonoid, one polyene–alkyne, and six phenolic acids and their derivatives. The contents of 45 compounds were lower in NI cultivated in Sichuan than in NI cultivated in Qinghai and Gansu, including 33 coumarins (seven simple coumarins, 20 linear furanocoumarins, and six angular furanocoumarins), four amino acids, three nucleosides, one carbohydrate, three phenolic acids, and one other chemical component.

Volcano plots were constructed to visualize the DAMs between wild and cultivated products, as well as between cultivated NI from Sichuan and cultivated NI from Gansu and Qinghai ( F,G). As shown in F, the contents of falcarinol (C104), azelaic acid (C68), notopterol (C97), aesculatin (C40), caffeic acid (C41), osthenol (C87), marmesin (C70), falcarindiol (C102), and ferulic acid (C53) were higher in wild NI than in cultivated NI. In contrast, the contents of proline (C16), phenylalanine (C33), asparagine (C11), D-sucrose (C18), succinic acid (C27), and cinnamic acid (C37) were higher in cultivated NI than in wild NI. This pattern, in which substances related to the tricarboxylic acid (TCA) cycle, such as aspartic acid and succinic acid, were more abundant in cultivated NI than in wild NI, indicates that cultivated NI directed more resources to development and fewer to secondary metabolism. Next, we compared cultivated NI from Sichuan with cultivated NI from Gansu and Qinghai. The contents of umbelliferone (C50), diosmin (C65), chlorogenic acid (C39), p-coumaroyl quinic acid (C44), ornithine (C1), arginine (C2), histidine (C3), and falcarindiol (C102) were higher in cultivated NI from Sichuan than in cultivated NI from Gansu and Qinghai. In contrast, the contents of nodakenin (C62), isoimperatorin (C101), bergapten (C81), bergaptol (C72), p-coumaric acid (C47), D-fructopyranose (C17), and phenylalanine (C33) were higher in cultivated NI from Gansu and Qinghai than in cultivated NI from Sichuan.

2.4. Pathway Enrichment and Metabolic Pathway Analysis of DAMs

To further reveal the molecular biological mechanisms leading to differences in chemical composition among the NI samples, enrichment analysis was conducted to identify the metabolic pathways enriched with DAMs using the Kyoto Encyclopedia of Genes and Genomes (KEGG) and MetaboAnalyst. The DAMs between wild and cultivated NI were enriched in 22 metabolic pathways ( A), including arginine biosynthesis (map00220); alanine, aspartate, and glutamate metabolism (map00250); arginine and proline metabolism (map00330); the TCA cycle (map00020); and phenylalanine metabolism (map00360). The DAMs between NI cultivated in Sichuan and NI cultivated in Qinghai and Gansu ( B) were enriched in 11 pathways, including phenylalanine, tyrosine, and tryptophan biosynthesis (map00400); phenylalanine metabolism; and arginine biosynthesis. Phenolic acids and coumarins are important therapeutic components of NI; these compounds display various pharmacological properties, such as anti-inflammatory, analgesic, and antioxidant activities. Phenylalanine (C00079) is a substrate for the biosynthesis of phenolic acids and coumarins.
First, phenylalanine is deaminated to cinnamic acid by L-phenylalanine ammonia-lyase (PAL), cinnamic acid is hydroxylated to p-coumaric acid (C00811) by cinnamic acid 4-hydroxylase, and p-coumaric acid is then converted into p-coumaric acid CoA by 4-coumarate:coenzyme A ligase. p-Coumaric acid CoA plays several roles: it is used in the synthesis of phenolic acids (caffeic acid (C01481), ferulic acid (C01494), etc.) through the shikimic acid pathway, or in the synthesis of chlorogenic acid (C00852) through the action of coumaric acid 3′-hydroxylase (C3′H). p-Coumaric acid CoA, via dihydroxycinnamoyl CoA, is also used in the synthesis of umbelliferone (C09315), which gives rise to a series of coumarins with complex structures under the action of C-prenyltransferases (C-PTs) and cyclases. The biosynthetic pathways of the phenolic acids and coumarins in NI are shown in . In the pathway by which phenolic acids and coumarins are synthesized from phenylalanine, there are seven important intermediates that affect the levels of phenolic acids and coumarins, namely cinnamic acid, p-coumaric acid, p-coumaroyl quinic acid, umbelliferone, osthenol, demethylsuberosin, and aesculatin. Their structural formulas are shown in .

The p-coumaric acid contents were higher in wild NI than in cultivated NI, so the corresponding downstream products caffeic acid and ferulic acid also had relatively high contents in wild NI. In addition, the p-coumaroyl quinic acid content was slightly lower in NI cultivated in Sichuan than in wild NI, but the chlorogenic acid content was significantly higher in NI cultivated in Sichuan than in wild NI. We speculate that this may be due to a higher expression level of coumaroyl quinic acid 3′-hydroxylase in cultivated NI from Sichuan. Coumarin synthase catalyzes the production of the structurally complex coumarin umbelliferone from dihydroxycinnamoyl CoA, which is generated from p-coumaric acid CoA. Osthenol (C18080) is then produced by the action of umbelliferone 6-prenyltransferase, and the intramolecular cyclization of osthenol by osthenol cyclase forms angular furanocoumarins; alternatively, umbelliferone 8-prenyltransferase can generate demethylsuberosin (C18083), and the intramolecular cyclization of demethylsuberosin gives rise to linear furanocoumarins. The umbelliferone content was significantly higher in NI cultivated in Sichuan than in the other samples, but the osthenol and demethylsuberosin contents were lower in NI cultivated in Sichuan than in the other samples. These two products are precursors for the synthesis of structurally diverse angular and linear furanocoumarins; therefore, the total contents of angular and linear furanocoumarins were lower in NI cultivated in Sichuan than in the other samples. In addition, the contents of simple coumarins, such as aesculatin (C09263), were lower in NI cultivated in Sichuan than in wild NI. Overall, the coumarin content was lower in NI cultivated in Sichuan than in wild NI. For the same reason, although the umbelliferone content was not significantly higher in NI cultivated in Sichuan than in wild NI, the contents of the precursors of angular and linear furanocoumarins (osthenol and demethylsuberosin) were significantly higher in the wild NI samples. Overall, therefore, the contents of angular and linear furanocoumarins were significantly higher in wild NI than in NI cultivated in Sichuan.
To explore the relationships between endogenous metabolites and the biosynthesis of phenolic acids and coumarins, we conducted a correlation analysis between amino acids and key intermediates in the phenolic acid and coumarin biosynthetic pathways. The results show that, except for cinnamic acid and umbelliferone, all of the other phenolic acids and coumarins were negatively correlated with amino acids. No previous studies have reported a direct impact of amino acids on coumarin biosynthesis. However, according to a study on the germination of Eleusine indica seeds, coumarins are allelochemical substances: exposure of E. indica seeds to coumarins resulted in significant changes in their amino acid profile and significantly affected the expression of genes related to the TCA cycle. In our study, asparagine and glycine, which are both related to the TCA cycle, and arginine and proline, which are related to nitrogen metabolism, showed significant negative correlations with some coumarins. These results suggest that there is a certain degree of competition between coumarin production and development-related pathways in NI. Based on its chemical structure, phenylalanine may be metabolized to generate cinnamic acid and tyrosine. Accordingly, in our correlation analysis, we detected a positive correlation between phenylalanine and cinnamic acid. This suggests that, to improve the yield of cinnamic acid and increase the contents of downstream phenolic acids and coumarins, specific enzymes could be up- or downregulated to minimize the amount of tyrosine generated from phenylalanine and increase the amount of cinnamic acid.

2.5. Results of Anti-Bacterial Inflammation Pharmacodynamic Study

2.5.1. Evaluation of Anti-Inflammatory Effect of NI Treatment by Neutrophil Counts

The neutrophil counts were significantly higher in the model group than in the control group ( p < 0.001), indicating the successful establishment of the zebrafish inflammation model. Both SW and SC-2 exhibited anti-inflammatory effects at medium and high doses, as evidenced by significantly lower neutrophil counts than those observed in the model group. Images of the neutrophils and bar charts of the neutrophil counts in zebrafish yolk sacs are presented in . The neutrophil count numbers are provided in .

2.5.2. Effect of NI Treatments on Transcript Levels of Genes Encoding Inflammation Markers

To evaluate the anti-inflammatory pharmacodynamic effects of SW and SC-2, the transcript levels of IL-1β, IL-6, and TNF-α were determined by qRT-PCR. As shown in , IL-1β, IL-6, and TNF-α were significantly upregulated in the model group compared with the control group, indicating successful establishment of the inflammation model. Compared with the model group, the groups treated with low and medium doses of SW and SC-2 showed downregulation of IL-6, and those treated with high doses of SW and SC-2 showed downregulation of IL-1β. The transcript level of TNF-α was not significantly affected by SW or SC-2 at any dose. In conclusion, both SW and SC-2 exerted anti-inflammatory effects, observed as decreases in the transcript levels of the inflammatory marker genes IL-1β and IL-6. The gene transcript level data are provided in .
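The relative transcript levels above are expressed against an internal reference gene (β-actin, per the methods). One common way to compute such fold changes is the 2^(−ΔΔCt) method; the sketch below is illustrative only — the choice of this method and all Ct values shown are assumptions, not the authors' reported data.

```python
# Minimal sketch of relative-expression calculation for qRT-PCR data.
# Assumptions: the 2^(-delta-delta-Ct) method with beta-actin as the internal reference
# and the untreated control group as the calibrator; the Ct values are hypothetical.

def relative_expression(ct_target, ct_actin, calib_ct_target, calib_ct_actin):
    """Return the fold change of a target gene vs. the calibrator group (2^-ddCt)."""
    delta_ct_sample = ct_target - ct_actin              # normalize to beta-actin
    delta_ct_calib = calib_ct_target - calib_ct_actin
    delta_delta_ct = delta_ct_sample - delta_ct_calib
    return 2.0 ** (-delta_delta_ct)

# Hypothetical mean Ct values for IL-6 in the control vs. LPS model groups
control = {"il6": 28.4, "actin": 16.1}
model = {"il6": 25.9, "actin": 16.0}

fold_change = relative_expression(model["il6"], model["actin"],
                                  control["il6"], control["actin"])
print(f"IL-6 fold change, model vs. control: {fold_change:.2f}")  # >1 means upregulated
```

With these hypothetical values the model group shows an IL-6 fold change of about 5, consistent in direction with the upregulation described for the LPS model.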
Material: Samples of Notopterygium incisum (NI) were collected from Sichuan, Qinghai, and Gansu provinces. These samples were identified as the dried rhizomes of Notopterygium incisum Ting ex H. T. Chang, a plant belonging to the Umbelliferae family, by Yang Bin, a researcher at the Institute of Traditional Chinese Medicine of the Chinese Academy of Chinese Medical Sciences. Voucher specimens have been deposited in the Institute of Traditional Chinese Medicine, Chinese Academy of Chinese Medical Sciences. Detailed sample information is presented in .

Animals: The zebrafish were kept in culture water at 28 °C (water quality: 200 mg instant sea salt per 1 L reverse osmosis water, with conductivity of 450–550 μS/cm, pH of 6.5–8.5, and hardness of 50–100 mg/L CaCO3). The fish were bred at the fish culture center of HuanTe Bio-Technology Co., Ltd. The license number of the experimental animals was SYXK (Zhejiang) 2022-0004. The feeding and management practices complied with the requirements of the international AAALAC certification (certification number: 001458); the IACUC ethics review number was IACUC-2024-9701-01.

Reagents: The reagents used in this study (and their manufacturers) were as follows: distilled water (Guangzhou Watsons Food and Beverage Co., Ltd., Guangzhou, China), ethyl acetate (AR, Xilong Science, Shantou, China), anhydrous sodium sulfate (AR, Shanghai McLean Biochemical Technology Co., Ltd., Shanghai, China), ethanol (AR, Tongguang Fine Chemicals company, Beijing, China), dimethyl sulfoxide (DMSO, Sigma, St Louis, MO, USA), methyl cellulose (Shanghai Aladdin Biochemical Technology Co., Ltd., Shanghai, China), and acetonitrile (GR, Thermo Fisher Scientific, Waltham, MA, USA).

Trial drugs: Lipopolysaccharide (LPS, Sigma, Saint Louis, MO, USA) and dexamethasone acetate (Shanghai Aladdin Biochemical Technology Co., Ltd., Shanghai, China) were used for the pharmacological experiments. Information regarding the reference standards used for chemical composition identification is provided in .

4.1. Preparation of Samples for Chemical Components Analysis

Preparation of extracts for analysis of volatile components: The dried rhizome of NI was crushed and passed through a No. 3 sieve to obtain the sample powder. Then, 100.00 g of the sample powder was mixed with 600 mL water and a small amount of zeolite, allowed to stand for 9 h, then heated for 12 h. The upper phase was collected as the volatile oil sample. For each sample, 0.1 mL volatile oil was mixed with 0.4 mL ethyl acetate, and then anhydrous sodium sulfate was added to remove the water.
The mixture was shaken well and kept at 4 °C overnight; then, an aliquot was used for the analysis of volatile compounds using GC–MS. Preparation of extract for analysis of non-volatile components: A 0.2 g portion of each powdered NI sample (sieved through a No. 3 sieve) was weighed precisely, and then 20 mL of 95% v/v ethanol was added. The mixture was subjected to ultrasonic treatment for 30 min (KQ-250DB, Kunshan Ultrasonic Instrument Co., Ltd., Suzhou, China) and then centrifuged for 10 min (9391× g ). A portion of the supernatant was filtered (0.22 μm cellulose membrane filter) before analysis of the non-volatile components by UHPLC-Orbitrap MS. QC test solution preparation: Samples of the nine batches of N. incisum powder (sieved through a No. 3 sieve) were combined to create a QC sample. To generate the mixed sample for QC, 22.30 mg of each sample was weighed precisely into a conical flask, and then 20 mL of 95% ethanol was added. The mixture was subjected to ultrasonic extraction for 30 min and then centrifuged for 10 min (9391× g ). The supernatant was filtered through a cellulose membrane filter (0.22 μm) before analysis. 4.2. Preparation of Samples for Analysis of Anti-Inflammatory Activity Two samples were used in the anti-inflammatory experiment, i.e., wild NI from Sichuan and cultivated NI from Sichuan. For each sample, 4 g of powder (sieved through a No. 3 sieve) was weighed into a 50 mL triangular flask. The sample was extracted with 25 mL of 95% v/v ethanol three times, for 30 min each, and the three extracts were combined. The extracts were concentrated under reduced pressure to remove the ethanol, and then freeze-dried (Alpha 2–4, LSCbasic laboratory freeze-dryer, Martin Christ, Osterode am Harz, Germany) to obtain the extract powder for the anti-inflammatory pharmacodynamic experiments. 4.3. Determination of the Chemical Components of NI Volatile Oil by GC–MS GC conditions: The gas chromatograph (Thermoscientific TRACE 1600, Thermo Fisher Scientific, Waltham, MA, USA) was equipped with a Thermoscientific TG-5SILMS (0.25 μm, 0.25 mm × 30 m) chromatographic column (Thermo Fisher Scientific, Waltham, MA, USA). The operating conditions were as follows: carrier gas, helium; inlet temperature, 250 °C; detector temperature, 250 °C; sample volume, 2 µL. The programmed heating conditions were as follows: a starting temperature of 50 °C, increasing to 130 °C at a rate of 3 °C·min −1 , then to 137 °C at a rate of 0.5 °C·min −1 , and then to 180 °C at a rate of 4.3 °C·min −1 , and held at 180 °C for 5 min. Mass spectrometry conditions: The mass spectrometer (Thermoscientific TSQ 9610, Thermo Fisher Scientific, Waltham, MA, USA) was operated with an EI ion source, with electron energy of 70 eV; a scanning interval of 0.30 s; a mass scanning range of 35–500 Da; and a gas flow rate of 1 mL·min −1 . 4.4. Determination of Chemical Components of NI by UHPLC-Orbitrap MS UHPLC conditions: The ultra-high performance liquid chromatograph (Thermo Scientific Vanquish) was equipped with a MORHCHEM Caprisil C18-X (1.8 μm, 100 mm × 2.1 mm) column (Morhchem, City of Industry, CA, USA). The samples were eluted by gradient elution with the mobile phase consisting of high purity water containing 0.01% acetic acid (solvent A) and acetonitrile (solvent B). The elution program was as follows: 0–9 min, 5–12% B; 9.00–18.00 min, 12–31% B; 18–36 min, 31–90% B; 36–36.5 min, 90–98% B; 36.5–38.5 min, 98% B; 38.6–43 min, 5% B. The column temperature was 35 °C, and the injection volume was 2 µL. 
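The gradient program above is a piecewise-linear schedule of mobile-phase B. Purely as an illustration, it can be written as (time, %B) checkpoints and interpolated; the checkpoint values below are taken from the text, while the helper function itself is only a sketch (the drop back to 5% B between 38.5 and 38.6 min for re-equilibration is an assumption about how the listed segments connect).

```python
# Illustrative representation of the UHPLC gradient in Section 4.4 as (time_min, %B)
# checkpoints, with linear interpolation between them. Not vendor software.

GRADIENT = [(0.0, 5), (9.0, 12), (18.0, 31), (36.0, 90),
            (36.5, 98), (38.5, 98), (38.6, 5), (43.0, 5)]

def percent_b(t: float) -> float:
    """Linearly interpolate the mobile-phase B percentage at time t (minutes)."""
    if t <= GRADIENT[0][0]:
        return GRADIENT[0][1]
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    return GRADIENT[-1][1]

print(percent_b(13.5))  # midway through the 9-18 min segment -> 21.5 %B
```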
Mass spectrometry conditions: The mass spectrometer (Thermo Orbitrap Exploris 120, Thermo Fisher Scientific, Waltham, MA, USA) was operated with an electrospray ionization source. The data were collected in positive and negative ion modes, respectively. The operating conditions were as follows: positive ion spray voltage, 3.50 kV; negative ion spray voltage, −3.00 kV; sheath gas, 40 arb; auxiliary gas, 10 arb. The temperature of the ion transfer tube was 320 °C, and a primary full scan was performed at a resolution of 120,000, with a primary ion scan range of 100–1500 m/z . Secondary cleavage was performed using an HCD, with a collision energy parameter set at 30% and a secondary resolution of 30,000. The ions in the first four collected signals were fragmented, and dynamic exclusion was used to remove interfering signals. 4.5. Anti-Inflammatory Pharmacodynamic Experiments A pharmacological experiment was conducted in which zebrafish were used as the experimental animals, and a bacterial inflammation model was established by injecting lipopolysaccharide (LPS) into the yolk sac. Cytokines such as IL-6 , IL-1β , and TNF-α are inflammatory markers, and their levels are commonly used indicators for evaluating the inflammatory effect of a drug or compound . The determination of transcript levels of cytokine-related genes as indexes of the inflammatory response can eliminate the interference of multiple factors in the body’s response. Compared with measurements of these cytokines, measurements of their gene transcript levels are a better index of inflammation because the gene expression response is faster and more sensitive, as well as more informative, in revealing the mechanism of drug action. To evaluate the pharmacological effects of oral administration of 95% ethanol extracts of wild and cultivated NI from Sichuan (SW and SC-2, respectively) on bacterial inflammation, we conducted quantitative real-time polymerase chain reaction (qRT-PCR) analyses to determine the transcript levels of IL-6 , IL-1β , and TNF- α and compared neutrophil counts among the control, experimental, model control, and positive control groups. 4.5.1. Maximum Detectable Concentration Determination Transgenic neutrophil green fluorescent zebrafish (MPX) were randomly selected 3 days post-fertilization (3 dpf) and added to 6-well plates (Zhejiang Bellambeau Biotechnology Co. Ltd., Hangzhou, China), with 30 tails per well. The NI samples were applied as an aqueous solution, and the treatment, model, and control groups were established with a volume of 3 mL per well. After 1 h of sample pretreatment, the yolk sacs of all experimental groups (except the normal control group) were injected with LPS using a microinjector (IM300, Narishige, Tokyo, Japan) to establish a bacterial inflammation model. The minimum detectable concentrations (MTCs) of the samples in the modeled zebrafish were determined after treatment at 28 °C for 2 h. The results are shown in . 4.5.2. Effects of NI Samples on Bacterial Inflammation (Neutrophil Counts) As described above, 3 dpf MPX were randomly selected and added to 6-well plates, with 30 tails per well. The NI samples were applied in the form of aqueous extracts, and the positive control was dexamethasone acetate with a concentration of 43.5 μg/mL. The control, model, and treatment groups were established with a total volume of 3 mL per well. 
After 1 h of sample pretreatment, the yolk sacs of each experimental group (except the normal control group) were injected with LPS to establish the inflammation model. After 2 h of treatment at 28 °C, 10 zebrafish tails were randomly selected from each experimental group and observed and photographed under a fluorescence microscope (AZ100, Nikon, Tokyo, Japan). The images were analyzed and processed using NIS-Elements D 3.20 advanced image processing software. This allowed us to count the number of neutrophils in the zebrafish yolk sac. 4.5.3. Effect of NI Samples on Gene Expression in the Zebrafish Inflammation Model The normal control, model, and treatment groups were established with 3 dpf MPX (30 tails per well), as described in . After 1 h of sample pretreatment, the yolk sac of each experimental group (except the normal control group) was injected with LPS to establish the inflammation model. After 2 h of treatment at 28 °C, RNA was extracted from the MPX in each group using an RNA Rapid Extraction Kit (TL2204001643C, Foshan Aowei Biotechnology Co., Ltd., Foshan, China). The concentration and purity of the total RNA were determined using a UV–visible spectrophotometer (Nanodrop 2000, Thermo Fisher Scientific, Waltham, MA, USA). The results are shown in . The transcript levels of the genes encoding inflammatory factors were determined by qRT-PCR, using the primer sequences shown in . For the qRT-PCR analyses, 2.00 μg of total RNA from each sample was used to synthesize cDNA in a 20.0-μL reaction mixture, using a cDNA First Strand Synthesis Kit (X0320, Tiangen Biochemical Science and Technology Co., Ltd., Beijing, China). The transcript levels of β-actin , IL-1β , IL-6 , and TNF-α were detected by qRT-PCR (T100, Bio-Rad, Hercules, CA, USA). The relative transcript levels of IL-1β , IL-6 , and TNF-α were calculated using β-actin as an internal reference. 4.6. Data Analysis 4.6.1. Methods for Identification of Chemical Components Volatile components analysis: Qualitative and semi-quantitative analyses of the volatile oil composition of the nine batches of samples were carried out by comparing the data obtained in our GC–MS analyses with those in the NIST database. The NIST database was searched, and the chemical constituents of the volatile oils were identified based on a match rate greater than 800. The chromatographic peaks of the nine batches of volatile samples were integrated, and the content of each volatile oil component was expressed as the relative peak area . Non-volatile components analysis: The raw mass spectral data acquired by UHPLC-Orbitrap MS were imported into Compounds Discoverer 3.3 software (Thermo Scientific, USA) connected with the online KEGG ( https://www.kegg.jp/ , accessed on 12 July 2024), ChEBI ( https://www.ebi.ac.uk/chebi/ , accessed on 12 July 2024), ChEMBL ( https://www.ebi.ac.uk/chembl/ , accessed on 12 July 2024), mzCloud ( https://www.mzcloud.org/ , accessed on 12 July 2024), and in-house Thermo Scientific mzVault databases, as well as other libraries. After peak filtering, peak alignment, and peak identification, a data matrix containing information such as retention time (RT), m/z , compound name, peak area, etc., was generated. Then, all identified compounds in this matrix were confirmed either by referring to the reference standard compound information regarding RT and m/z or by analyzing the first and second stage m/z information, based on the compound’s mass spectrometric fragmentation regularity. 
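Compound confirmation as described above rests on agreement between observed and theoretical RT and m/z values. The following is a minimal sketch of such an m/z check, assuming a simple ppm-tolerance filter; the 5 ppm tolerance mirrors the deviation limit quoted in the next paragraph, and the function names and example masses are hypothetical rather than taken from the study's data.

```python
# Minimal sketch of m/z-based confirmation using a ppm tolerance filter.
# The 5 ppm tolerance mirrors the deviation limit stated in the text;
# the example masses are hypothetical and not drawn from the study's data matrix.

def ppm_error(observed_mz: float, theoretical_mz: float) -> float:
    """Relative mass deviation in parts per million."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

def within_tolerance(observed_mz: float, theoretical_mz: float, tol_ppm: float = 5.0) -> bool:
    """True if the observed m/z lies within +/- tol_ppm of the theoretical value."""
    return abs(ppm_error(observed_mz, theoretical_mz)) <= tol_ppm

# Hypothetical example: [M+H]+ of umbelliferone (C9H6O3), theoretical m/z 163.0390.
print(round(ppm_error(163.0395, 163.0390), 1))   # 3.1 ppm
print(within_tolerance(163.0395, 163.0390))      # True
```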
In the qualitative analysis of the metabolites, interference from the isotopic signals, duplicate signals from the K + and NH 4 + ions, and fragment ions from other larger molecules were removed. The deviation was set to <5 × 10 −6 . The content of each chemical component in the sample is represented by the peak area. 4.6.2. Phytometabolomic Research Based on Multivariate Statistical Analysis The qualitative and semi-quantitative results of the chemical components of the samples underwent multivariate statistical analyses, including cluster analysis (CA), PCA, and OPLS-DA. The CA grouped samples according to the distribution of metabolites, and the heatmap was drawn online using SRplot ( https://www.bioinformatics.com.cn , accessed on 27 September 2024). The clustering approach was set to "complete", the distance measure to "Euclidean", and the callback function to "pheatmap". The PCA clustered the samples on the basis of similarities in the volatile oil components and the 95% ethanol extract components, including quality control samples (QC), using SIMCA 14.1 software (Umetrics, Malmö, Sweden). The scaling type was set as unit variance (UV). For the PCA of the volatile oil components, the cumulative contribution rate of five principal components reached 0.935, and the value of Q 2 was 0.59. Meanwhile, for the PCA of the 95% ethanol extract components, the cumulative contribution rate of three principal components reached 0.846, and the value of Q 2 was 0.572, indicating that the models possess good predictive ability. Then, OPLS-DA was performed to calculate the VIP value for the screening of DAMs. The validity of the OPLS-DA model was assessed using the permutations function. The predictive parameters for evaluating the model included R 2 X, R 2 Y, and Q 2 , where Q 2 indicates the predictive power of the model, and R 2 X and R 2 Y indicate the rate of explanation of the X and Y matrices, respectively, by the constructed model. The closer these three indicators are to 1, the more stable and reliable the model. A valid model is indicated by Q 2 > 0.5. In this study, all of the OPLS-DA models exhibited R 2 X values greater than 0.7, and both R 2 Y and Q 2 values were greater than 0.9, indicating outstanding performance in explaining independent and dependent variables, as well as for predicting new data. On the basis of the permutation tests, these models were not overfitted. To obtain more valuable information, a t -test was also implemented with SPSS 25.0 (IBM, Armonk, NY, USA). In the end, the DAMs were screened according to the following thresholds: VIP > 1, p -value of t -test < 0.05, and Log 2 FC > 0 or < 0. The DAMs were visualized in volcano plots generated using SRplot ( https://www.bioinformatics.com.cn , accessed on 28 September 2024). 4.6.3. Metabolic Pathway Analysis The DAMs were annotated using the KEGG database. An enrichment analysis to determine the biosynthetic and metabolic pathways associated with the differential components was conducted using MetaboAnalyst 6.0 ( https://www.metaboanalyst.ca/ , accessed on 11 October 2024). The results are shown as bubble diagrams. The biosynthetic pathways of phenolic acids and coumarins, which are the main secondary metabolites of NI, were constructed based on information reported in the literature. Clustered heat maps were constructed to display the distribution of secondary metabolites related to the pathway maps in the nine batches of samples.
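Returning to the DAM screening rule given above (VIP > 1, t-test p < 0.05, and a non-zero Log 2 FC), the logic can be expressed compactly. The sketch below is illustrative only: it assumes VIP values exported from the OPLS-DA model and replicate peak areas arranged with metabolites as rows, and it uses pandas and SciPy in place of SIMCA and SPSS, the software actually employed.

```python
# Illustrative DAM screening (VIP > 1, t-test p < 0.05, non-zero log2 fold change).
# Assumes VIP values exported from the OPLS-DA model and replicate peak areas per group,
# with metabolites as rows; pandas/SciPy stand in for SIMCA and SPSS used in the study.
import numpy as np
import pandas as pd
from scipy import stats

def screen_dams(group_a: pd.DataFrame, group_b: pd.DataFrame, vip: pd.Series,
                vip_cut: float = 1.0, p_cut: float = 0.05) -> pd.DataFrame:
    """group_a/group_b: rows = metabolites, columns = replicate peak areas (same row order)."""
    log2_fc = np.log2(group_b.mean(axis=1) / group_a.mean(axis=1))
    p_values = stats.ttest_ind(group_a, group_b, axis=1).pvalue
    table = pd.DataFrame(
        {"VIP": vip.values, "log2FC": log2_fc.values, "p": p_values},
        index=group_a.index,
    )
    table["DAM"] = (table["VIP"] > vip_cut) & (table["p"] < p_cut) & (table["log2FC"] != 0)
    return table
```

Metabolites flagged in this way correspond to the points highlighted in the volcano plots.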
These clustered heat maps revealed the key metabolic components responsible for the metabolic differences between the wild and cultivated NI from different origins. Pearson's correlation analysis was performed using Origin 2022 software (OriginLab, Northampton, MA, USA). The correlation analyses were conducted to detect relationships between the amino acid components of wild NI and NI cultivated in Gansu and Qinghai and the phenolic acid and coumarin components of the above metabolic pathways. These analyses revealed the potential relationships between endogenous substances in NI and its secondary metabolism. The correlation heatmaps were plotted using the Correlation Plot Application in Origin 2022 software, with blue ovals indicating negative correlations, red ovals indicating positive correlations, and narrower ovals indicating larger correlation coefficients. Asterisks indicate significant correlations between two components, where * and ** indicate significance at the p < 0.05 level and the p < 0.01 level, respectively. 4.6.4. Statistical Analyses of the Results of Pharmacological Experiments The results of the pharmacological experiments are expressed as mean ± SE (standard error). The number of neutrophils and the gene transcript levels were compared between the model group and the control and experimental groups separately via a t -test, using SPSS 25.0 (IBM, Armonk, NY, USA). Differences were considered significant at p < 0.05. In the figures, asterisks indicate significant differences between the model group and the control/treatment groups, with * and ** indicating significant differences at p < 0.05 and p < 0.01, respectively. Histograms were generated using GraphPad Prism 9.5.0 (GraphPad Software, La Jolla, CA, USA).
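For the gene-expression readout, the text states only that β-actin served as the internal reference and that groups were compared with t-tests. The sketch below therefore assumes the widely used 2^-ΔΔCt method for relative transcript levels; the Ct values are hypothetical placeholders, and the group comparison mirrors the two-sample t-test described in Section 4.6.4 above.

```python
# Minimal sketch of relative transcript levels and group comparison, assuming the
# common 2^(-ΔΔCt) approach (the text states only that β-actin was the internal
# reference). All Ct values below are hypothetical placeholders.
import numpy as np
from scipy import stats

def relative_expression(ct_target, ct_reference, calibrator_delta_ct):
    """2^-(ΔΔCt), where ΔCt = Ct(target) - Ct(β-actin) and ΔΔCt is taken against the control mean."""
    delta_ct = np.asarray(ct_target) - np.asarray(ct_reference)
    return 2.0 ** -(delta_ct - calibrator_delta_ct)

# Hypothetical IL-6 and β-actin Ct values for three control and three model-group pools.
ctrl_il6, ctrl_actin = [28.1, 28.4, 28.0], [16.2, 16.3, 16.1]
model_il6, model_actin = [25.9, 26.2, 26.0], [16.2, 16.4, 16.3]

calibrator = float(np.mean(np.array(ctrl_il6) - np.array(ctrl_actin)))
ctrl_expr = relative_expression(ctrl_il6, ctrl_actin, calibrator)    # centred near 1
model_expr = relative_expression(model_il6, model_actin, calibrator) # elevated after LPS

# Two-sample t-test between the model and control groups, as in Section 4.6.4.
t_stat, p_value = stats.ttest_ind(model_expr, ctrl_expr)
print(model_expr.mean(), p_value)
```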
In this study, wild and cultivated NI from Sichuan, Qinghai, and Gansu were subjected to metabolic analyses. The results showed that the chemical compositions of NI differed, depending on the origin, as well as between wild and cultivated materials.
The total contents of angular coumarins, linear coumarins, and simple coumarins differed between wild NI and cultivated NI from Gansu and Qinghai, while the contents of the vast majority of phytometabolites, such as angular coumarins and linear coumarins, in cultivated NI from Sichuan were significantly lower than those in the other candidates. Notably, the contents of angelicin (C77), 6′- O -β-D-glucoxyl-7′-hydroxyberganottin (C80), and 8-geranyl-5-methoxy-psoralen (C107) were significantly higher in cultivated NI from Sichuan than in the other candidates. The differential metabolic pathways between wild and cultivated NI included arginine biosynthesis; alanine, aspartic acid, and glutamic acid metabolism; arginine and proline metabolism; and phenylalanine metabolism. The differential metabolic pathways between cultivated NI from Sichuan and cultivated NI from other origins included phenylalanine, tyrosine, and tryptophan metabolism, as well as arginine biosynthesis. Analyses of these biosynthetic pathways revealed seven metabolic intermediates in the biosynthesis of phenolic acids and coumarins that were key factors contributing to the differences in the compositions of phenolic acid and coumarin components formed by NI's origins and growth modes, i.e., cinnamic acid, p -coumaric acid, p -coumaroyl quinic acid, umbelliferone, osthenol, demethylsuberosin, and aesculetin. We detected significant correlations between amino acids and coumarins and between some amino acids and phenolic acid biosynthesis. A pharmacodynamic study using the yolk sac model in transgenic neutrophil green fluorescent zebrafish (MPX) showed that both wild and cultivated NI from Sichuan exhibited anti-inflammatory pharmacological effects. The results of this study provide a basis for further research on NI resources and their cultivation, as well as for the development and application of products from these materials.
Contemporary dental tourism: a review of reporting in the UK news media In recent years, the practice of international travel for low-cost dental care has increased in popularity. The phenomenon is referred to as dental tourism. Dental tourism is affecting many high-income countries and commonly occurs along regional, as opposed to global, pathways. For example, the combination of a holiday and cut-price dental treatment has led Turkey, Hungary and Poland to emerge as key players in the British dental tourism industry; people in the United States seek inexpensive dental care in Argentina, Costa Rica or Peru; and people in Australia travel for low-cost dentistry in Indonesia or Thailand. In the United Kingdom (UK), dental tourism is on the increase. In 2014, 48,000 people sought dentistry outside of the UK. In 2016, the number had increased to 144,000. Motivations for medical tourism can be categorised into pushing and pulling factors; for example, quality, efficiency, holidays and hospital reputation exemplify pulling factors. Other reasons include reducing treatment timescale or increasing the variety of treatment options on offer. Conversely, pushing factors that drive patients to seek care outside of the UK include the high cost of treatment, long waiting lists and lack of dental care availability. Reportedly, some people travel abroad because of a lack of trust in National Health Service (NHS) dentists, difficulty registering with an NHS dentist and 'amateurish' results of dental treatment in the UK. Dental professionals have expressed their concern about the recent rise in young people opting for dentistry abroad. The British Dental Association (BDA) surveyed 1,000 dentists who described the adverse health consequences associated with dental tourism. Almost all respondents to the survey reported that they had examined patients who had been on dental tourism trips and most (86%) reported treating people suffering consequences after treatment abroad. Respondents believed that crowns and implant treatments were the most at risk of failure. On average, the cost of remedial dental care in the UK ranged from at least £500 (65%) to more than £1,000. However, a significant number (20%) of dentists estimated the cost of rectifying complications arising from dental tourism at more than £5,000. In response to the dental tourism trend, the BDA has issued recommendations for raising awareness of the risks, including proactive campaigns to inform the public. Further, concerns have been raised about child patients being offered invasive restorative treatment for minor aesthetic concerns as a 'freebie' alongside their parent's course of treatment. The Department of Health has previously indicated that the NHS is responsible for providing emergency care, but not remedial care (eg elective revision), for cosmetic procedures provided outside of the UK. Asher and colleagues have queried whether NHS Trusts could heighten their presence in the international market as providers of private services. They suggest this may eliminate medical-ethical concerns associated with substandard cosmetic tourism. In the UK, controversy has been escalating around the trend for young adults to seek dental transformations and share their experiences on social media platforms such as Instagram and TikTok.
The negative impacts of dental tourism were brought to wider public attention by a BBC documentary entitled Turkey teeth: are they worth it?, which was released in July 2022. This created a wave of media interest in the topic. In the UK, newspapers are the source of almost 40% of the nation's news, and attitudes toward their quality, accuracy, impartiality and trustworthiness have remained steady in recent years. Public discourse and the framing of public health problems in national newspapers both shape and reflect public opinion. Newspapers can also be a tool for health advocacy. However, different newspapers have different stances on public and political issues and different readerships. Therefore, analysing newspaper content on dental tourism could provide valuable insights into public opinion toward dental tourism. Thus, the two-fold aim of this study was to understand the key topics and issues relating to dental tourism in the UK news media, and how the UK news media frames dental tourism.
Ethics statement The study synthesised and interpreted secondary data in the form of newspaper articles that were already published and readily available in the public domain. No human participants were involved in the study. Therefore, formal processes of ethical approval were not required. Search strategy Newspaper articles were identified using the LexisNexis database. LexisNexis is a data analytics company; its databases are accessed through online portals, including portals for computer-assisted legal research, newspaper search and consumer information. The ten most popular newspapers in the UK were used for the search strategy: popularity was based on the Audit Bureau of Circulation data on the circulation of both daily and Sunday print publications, as well as access to online articles, reflecting the readership of each newspaper. A pilot search was used to identify key words and to decide the date parameters for inclusion. The pilot search included hand searching 50 news articles using a search engine and a search for academic articles using the search string 'dental AND tourism'. This investigative search revealed that terminology used by the media differs from that used by the dental community. For example, newspaper articles tended to use phrases such as 'dentistry overseas' or 'dentistry abroad'. Contrastingly, dental articles or newspaper articles offering advice from dental professionals commonly used the phrase 'dental tourism'. The full search strategy is presented in . Inclusion and exclusion criteria Authors have identified media interest in dental tourism as first emerging around two decades ago. However, this review article specifically focused on contemporary narratives and thus used 2018 as the cut-off point. The reason for this decision is that, in 2018, UK reality television celebrities began to disclose having had dental treatment abroad. This has been heralded in many newspaper articles as a defining catalyst for the most recent wave of dental tourism which has uniquely impacted on younger people and social media. Through a process of independent review and collaborative discussion, three reviewers (JD, PA, JJ) identified which papers met the inclusion and exclusion criteria. Articles from tabloid and broadsheet newspapers were included. Tabloids are newspapers which are typically sensationalist and report more celebrity material. Broadsheets are perceived to be more intellectual in content. Advertorials were excluded from the analysis, as were articles which pertained to other healthcare procedures performed outside of the UK. Articles which referred to someone as having 'Turkey teeth' but where the topic of the article was not related to dental tourism were excluded.
Data were extracted into an Excel spreadsheet. Articles were mapped to identify the newspaper name, type (tabloid or broadsheet), the annual frequency of articles published on dental tourism each year between 2018-2023, and the central protagonist (dental professionals, patients, journalists, dental organisations, others, or multiple). Critical appraisal was not undertaken due to the non-academic nature of the articles. Full-text papers were read by JJ, JD, and PA. JJ, JD, PA and DM met twice to discuss the codes and the themes in depth. All authors reviewed and refined the final themes. All textual components of the articles, including journalists' narratives and quotations, were used as data and analysed using framework analysis. The following stages of the framework approach were undertaken: familiarisation; coding; developing and applying an analytical framework; and charting and interpreting the data. Inductive coding was carried out to identify descriptive and analytical concepts within the data. Initial codes were created by JD, JJ, PA and DM after reviewing the first 50 articles. These codes were then applied by JD, JJ and PA to the remaining 81 articles. Where new concepts were identified that were not adequately captured by other codes, new ones were created. The codes were reviewed and refined by JD, JJ, PA and DM to merge similar concepts, separate different concepts and refine the code descriptor to concisely reflect the intended meaning. The codes were then reviewed by JD, JJ, PA and DM to organise them into overarching analytical themes to answer the research questions. The findings are presented as themes. Finally, these themes were then assessed by the full authorship team for their relevance and appropriateness to answering the research questions.
The search strategy identified 201 newspaper articles related to dental tourism. A total of 131 articles were included in the analysis . The remainder were excluded because they were advertisements, were not related to the research question, or were duplicated articles published both in paper and online versions of the same newspaper. Most articles were published in 2022 and 2023 . The 2022 and 2023 peaks in article numbers coincided with the publication of dental contract reforms, the emergence of the TikTok trend of #TurkeyTeeth and the release of the investigative television show ‘ Turkey teeth: bargain smiles or big mistake' in July 2022. Ten articles (7.6%) were published in broadsheet newspapers. The remaining articles (92.4%) were published in tabloid newspapers, and of these, 106 (80.9%) were published in either The Sun or a newspaper linked to The Mail (eg Daily Mail, MailOnline etc.) . Five key themes were identified from analysis of the newspaper articles. The themes included: push and pull factors reported to lead to seeking dentistry abroad; patient-reported outcomes and experiences; warnings from dental professionals; amplifying social media hype; and media shaming and stigmatising. Motivators: push and pull factors leading to seeking dentistry abroad Pull factors are those that draw people to another country for dentistry. Push factors are those that encourage people to leave their country of residence. In this study, we found that pull factors included celebrity influence, treatment affordability, and seeking a quick fix to improve appearance and self-esteem. Push factors included perceptions of high costs of dental treatment and difficulties accessing dental care. Self-esteem and social signalling The newspaper articles commonly highlighted celebrities who had sought dentistry abroad. Celebrity status and television shows, specifically celebrities featured on reality shows such as Love Island, were commonly mentioned. All references were from tabloid newspapers: ‘Love Island winner…who travelled to Turkey for ten crowns before going on the reality show in 2018, said he had not realised “it was quite as invasive as it was”. He added: “my mum used to be a dental nurse so I know how expensive it is to get your teeth done. I knew it would be about £10,000 to £15,000 easily in England. So I thought I'd rather just go to Turkey get a bit of sun, have a laugh”' (celebrity, article 2). Motivators for seeking dental treatment abroad also included low self-esteem or confidence related to the appearance of teeth and the lure of a quick fix to improve appearance and boost confidence: ‘I'd read about the Turkey teeth quick fix and went online watching influencers rave about their new cheap smile and a sun-filled holiday. I was won over and instantly booked a trip to Turkey to have my teeth fixed and to grab a much-needed break. I arrived in July 2018 and saw the clinic's dentist, he took an x-ray and told me I needed crowns but no more than that…within hours I was injected with painkillers' (patient, article 77). Some articles described people with multiple vulnerabilities including issues with self-esteem which made them more susceptible to being leveraged by predatory marketing. 
One article contrasts the marketing for a company that provides cosmetic dental treatment in Turkey (which appeared on the London Underground and on buses) with how UK practitioners are expected to approach marketing, noting that the General Medical Council guidance for practitioners offering cosmetic treatment insists that 'marketing must be responsible' and should not 'minimise or trivialise the risks of interventions'. In one article, a UK dentist says: 'It's companies that are trying to make money on impressionable and often vulnerable people, who are unaware of the ramifications of what they are getting themselves into. I'm not allowed to advertise like that to the under 18s yet you have a treatment there which isn't general dentistry, it's not your six month check-up…this is very aggressive' (dentist, article 4). Affordability and difficulties accessing dental services in the UK A common recurring theme across articles was the significantly lower cost of dental and medical cosmetic procedures abroad, especially in Turkey, compared to similar treatment in the UK. Cosmetic treatments that many people might have previously believed were beyond their financial means were now accessible. Patients commonly described choosing dentistry outside of the UK because of cost-saving factors, rather than perceptions of higher-quality care: 'Cosmetic work in Turkey comes cheap. Incredibly cheap, generally a third of what you'd pay at a UK clinic, sometimes even less. A new nose for £2,500. A full set of "Turkey teeth", those dazzling, perfect pearly whites that are suddenly everywhere, starts at £3,200…by comparison, rhinoplasty (a nose job) in the UK starts at around £6,200…a new set of teeth at least £12,000' (journalist, article 122). Several articles, especially those covering the joint BDA-BBC documentary Disappearing dentists, highlighted that patients have been unable to gain access to an NHS dentist after calling dozens of dental practices. The narrative described how people who were unable to afford private dentistry in the UK had to decide whether to have no treatment, perform DIY (do-it-yourself) dentistry, or travel abroad: 'After trying and failing to get an NHS dentist appointment in Britain, recent research found that 91% of UK dental practices are refusing to take new patients - "getting my teeth fixed abroad seemed like a logical option"' (journalist and patient, article 31). Although most articles presented the perspective of a patient or of the journalist, and were often sensationalist in their slant, three articles did contain statements from the BDA, offering more context to the crisis of NHS dental access: 'Unless the government invest loads in the NHS, so everyone can have an NHS dentist, people are going to be in the horrendous position of having to go abroad' (BDA board member and dental practice owner, article 31). Patient-reported outcomes and experiences Many people who had had dentistry abroad explained that 'it was worth it', accepting the trade-off of dental health for cosmetic improvements. Some patients accepted dental pain and discomfort in exchange for the improvements to their confidence, self-esteem and self-assessed attractiveness: 'All in all, absolutely mint, we're made up, couldn't be happier. Highly recommend, it doesn't half change your face. Eating is difficult though, it's very sensitive to cold so it's something to get used to a few days afterwards. It's good for the diet because you do not eat whatsoever, it's difficult to eat.
Get used to drinking through a straw' (husband and wife who have been abroad for dental care, article 102) 'JAW THING: I broke down in tears when I saw my shaved-off Turkey teeth and had to live on cake and omelettes but now I love them' (headline, article 12) 'TOOTH OF THE MATTER: I got Turkey teeth and I don't care if I regret them in ten years I think they look so good' (headline, article 29). Several articles featured people describing diet modifications, such as a soft diet and drinking through a straw, to mitigate post-procedural discomfort. People who had bad experiences with dental procedures performed abroad described issues such as 'dead stumps', abscesses and pain. Words commonly used to describe the impact of cosmetic dental treatment undertaken abroad included 'pain', 'aggressive', 'invasive', 'complications', 'problems' and 'infection'. Individuals described irreversible physical harm and ongoing anxiety related to both their dental health and the costs of remedial care. Confused about the procedures they'd had, people felt misinformed about the implications of the cosmetic treatment. People described mental and physical harm as a result of dental tourism abroad. A few articles described people who had suffered harm following dental tourism and now used their social media influence as a vehicle to deter others from doing the same. Influencers presented their experiences in different ways; some described themselves as victims of poor-quality dental care and confusion around what procedures they were consenting to, while others explained that, although they were aware of the risks, the improved aesthetics were worth it. Others tried to discourage followers from following in their footsteps and saw their role as that of a health advocate: 'You've done a brave thing…to warn people knowing you'll likely get hate. Fair play spreading awareness. This is a new trend among young people and it's going to be a hard pill to swallow for the unlucky ones' (social media commenter, article 14) 'UH OH: I got Turkey teeth but it was a big mistake. I've now got a lisp and can't close my mouth properly, please don't do it' (headline, article 34). Warnings from dental professionals The destructive dental transformations described in the articles were at odds with the culture of minimally invasive dentistry in the UK. The articles that described the perspectives of dental professionals had a paternalistic element, with little recognition of the strong motivators/drivers for patients to seek dentistry abroad. Dental professionals quoted in the articles advised that cheap procedures in Turkey may imply inferior-quality treatment and that, for high-quality dental care, people can expect to pay prices similar to those in the UK. Dental professionals unanimously agreed that in the UK, minimally invasive approaches to dental care, such as orthodontic appliances, whitening and bonding, were the preferred practice to maintain health and improve aesthetics. The lack of regulatory processes and the absence of access to legal redress or follow-on care were common concerns across the articles.
A few articles described remedial care as being provided by the NHS, while private providers described actively avoiding the provision of care for people who'd had dentistry abroad because of fear of liability: ‘If I did 20 crowns on a 21-year-old for the purpose of improving the colour, I would have my licence revoked, I would be struck off […] at the point you inherit that patient and do any work, that's when the problems really start and that's when the UK dentist becomes liable. A risk we cannot take' (director of a dental clinic in Liverpool, article 51) ‘It's shocking people have no clue what they've done. They talk about veneers - mouldings bonded to the front of a tooth - but in reality, they are crowns, meaning much more aggressive tooth reduction' (NHS dentist, article 10). Amplifying social media hype Newspaper articles commonly built their narrative using a single post lifted from social media, specifically TikTok. This tactic frames social media posts and perspectives as noteworthy and important in contemporary thinking about dental tourism. These posts documented journeys, complications and/or enhanced aesthetics. Common among them was the use of the #TurkeyTeeth hashtag. The phrase ‘Turkey teeth' has created a group identity: people who have been to similar places for similar procedures, have had similar experiences on their journey, ultimately obtained a similar aesthetic outcome, and in the future will go through the same challenges, disappointments and expenditure on remedial care. Social media provided a space for people to encourage others to seek dental tourism, to be seen as attractive and rewarded with likes and followers, or to generate a buzz around the potential complications of dental tourism. Journalists used emotionally charged language, for example, describing the numbers of views garnered by videos about dental tourism and #TurkeyTeeth as ‘whopping'. Conversely, newspapers can drive readers to their articles by using influencers' stories and provocative negative comments to engage readers. Both positive and negative information about the medical enhancement procedures makes headlines: ‘Her video has clearly shocked many, as it has quickly gone viral and has racked up a whopping six million views. It has 397.6k likes, 4,595 comments and 6,968 shares' (article 39). Media shaming and stigmatising People were discredited by journalistic descriptions of unrelated character labels (eg blonde bombshell, woman bitten by a bat, blonde-haired woman) and referred to as ‘brutally mocked' by online trolls (article 22). The language used by journalists conveyed a tilt toward scorn, for example, the repeated use of the word ‘dubbed' in reference to the phenomenon of ‘Turkey teeth' and the use of terms that are comical in reference to teeth, for example, calling them ‘gnashers', which is commonly associated with fake teeth (article 53). Many headlines likened people's appearance to animals or made ridiculing comments about the appearance of their teeth. Quotations drawn from comments on the featured social media posts ranged from support, encouragement and compliments through to victim blaming, stereotyping, labelling, stigmatising and shaming: ‘NOT ALL WHITE: I splashed 3.6k on a set of white Turkey teeth but trolls say I look like a horse and most people think I've been scammed' (headline, article 38).
A dichotomy was apparent - some responded to full-mouth transformations as a symbol of good taste, while others perceived them as distasteful and to be avoided or ridiculed: ‘His teeth were fine before but it's up to him what he spends his money on'; ‘looks like that smile filter'; ‘too white? Just me? I mean, they look good and everything but too white, no?'; ‘Turkey is where poor working class go' (TikTok user comments, article 63) ‘NOT WHITE: My man and I jetted to Turkey for new teeth - we love our gnashers but it's REALLY divided opinion' (headline, article 69). Common denigrations included warnings that influencers would live to regret their choices, recommendations of minimally invasive approaches as a preferred alternative, and descriptions of people as vain, fake-looking or foolish.
To our knowledge, this study is the first to explore UK news media content on dental tourism. The articles included in this review were published after the 2018 celebrity endorsements of cosmetic dental tourism abroad. Most articles included in this study that pertained to dental tourism (92.4%) were reported in tabloid newspapers. Most articles were published in Daily Mail publications and The Sun; these are the first and fourth most popular newspapers in the UK, respectively. Many articles in this study were distinctly stigmatising of people seeking dental tourism and suffering from its consequences. Almost all of the articles about post-operative complications were Turkey-centric, primarily because the media were focusing on social media accounts using #TurkeyTeeth, as opposed to Turkey specifically being implicated in providing substandard dentistry. Collectively placing all overseas dentistry within this category is convenient but both unfair and inappropriate; nevertheless, ‘Turkey teeth' has become a recognised colloquial term in the zeitgeist for what is otherwise a difficult-to-define and complex social phenomenon. Celebrity culture, class and dental treatment Social media and celebrity status have previously been identified as motivators for patients' choice of dental clinic and their perceptions of what constitutes the ideal ‘Hollywood' smile. People who feel that they are failing to meet the benchmark for the appearance of oral health may experience low self-esteem and dissatisfaction with their appearance. This creates a desire for improved aesthetics and greater self-confidence, which can be achieved by undergoing cosmetic dental procedures. As a result, some people may be increasingly susceptible to predatory marketing which offers cheap, quick fixes and radical overnight transformations abroad. In 2018, Forbes described one of the fastest growing trends in the beauty industry as ‘the instant fix' - products that offered instant gratification and immediate improvements in appearance. However, this is at odds with conservative approaches to cosmetic dentistry, which often require more time (eg whitening, orthodontics, composite bonding) and after-care (eg retainers, top-up whitening, polishing) compared with the immediate placement of indirect restorations described in the newspaper articles. Therefore, the professional approach to aesthetic/cosmetic dental treatment in the UK is directly at odds with recognised consumer preferences that demand and expect rapid cosmetic outcomes. Conspicuous consumerism is overspending on goods or services to display one's wealth and social status. It has been associated with low social self-esteem in those who identify as being in a higher subjective social class. The impact of conspicuous consumerism in cosmetic practice drives those seeking dental enhancement to opt for more, rather than less, adjustment to their appearance. In this circumstance, individuals pursue a smile which they and others perceive to be not only attractive, but also one which signals their means to be able to afford this treatment in the first place. In this paradigm, teeth are modified to be brighter and whiter than is naturally attainable so that there can be no mistaking that cosmetic dental intervention has been undertaken. An interesting evolution of the phenomenon of cosmetic conspicuous consumerism is the promotion not only of the aftereffects, but also of the cosmetic journey.
Individuals may openly publicise the process of teeth being modified for cosmetic adjustment, with the destructive component of tooth preparation being a very visible stage of this path. Many tabloid articles had a moralising undertone that used likenesses to animals and disparaging descriptors to mock or undermine people who were experiencing the consequences of dentistry abroad. In other studies, tabloid newspapers have been found to present significantly more stigmatising content related to health conditions when compared to broadsheet publications. Newspaper articles in this study tended to amplify the voices of individual social media influencers and their scathing critics, infrequently providing further information beyond the original text and comments. Further, by subtly discrediting individuals (eg blonde bombshells), journalists legitimised the shaming and ostracising of people who have had cosmetic dentistry abroad, thereby diminishing compassion toward them. Many articles described divided public views around cosmetic dentistry and unnatural appearances of teeth in terms of their shape and shade. In recent years, there has been extensive and increasing coverage of ‘culture wars' in UK tabloid and broadsheet media. Culture wars are conflicts between groups with different cultural ideals, beliefs or philosophies. There are references to ‘Turkey teeth' being associated with poor or working-class individuals and younger people. The newspaper articles in this study demonstrate culture wars, with people being stigmatised as vain or foolish in articles written for the consumption of another group of people. Tabloid newspapers in particular legitimised the shaming and ridiculing of people who are unable to afford or access dentistry in the UK and the harm that has come to them as a result of seeking affordable dentistry abroad. As negative stories about #TurkeyTeeth have gone viral, one outcome could be to dissuade people from having ‘Turkey teeth' if they believe they might suffer the same physical and social ramifications of stigma and shame. We cannot be certain whether social media shaming will stem the tide of young people seeking cosmetic dentistry abroad; the current evidence is conflicting. While some authors report that the anticipation of public or private (internal) shame can reduce the likelihood of making risky decisions, others have found that anticipated embarrassment can lead to a search for risk. Dental tourism and health policy The journalism examined in this study fails to recognise that the phenomenon of dental tourism should be broken into two separate components: 1) dental tourism that promotes access to dental care; and 2) dental tourism for purely cosmetic reasons. Where an individual decides to seek dental care abroad, either for sociocultural reasons (ie returning to a home country to have care in a more culturally inclusive setting) or for socioeconomic reasons (ie to be able to access affordable care more easily), there should be greater reflection on whether criticism from either the dental profession or the public is appropriate. Most professional dental organisations (representing the profession as members) tend to take a negative view of patients travelling abroad for care and this perspective often fails to account for sociocultural reasons for seeking care.
There is no objective evidence to show that the majority of patients travelling abroad for care experience exploitation or poor clinical outcomes, and there is little evidence-based reflection on the comparative risks of similar issues occurring while seeking care in one's own country. Much of the anecdotal commentary from the profession, in which dentists report seeing poor outcomes from overseas dentistry, could equally be made of care provided domestically. One of the comments spotlighted in this study, describing dental tourism as ‘horrendous', is not commensurate with the reality that, for some, travelling abroad for care is not a choice taken begrudgingly. Many of the contemporary dental tourism news articles included in this review specifically focused on the viral hashtag #TurkeyTeeth and the boom of positive and negative social media interest in the topic. At the time of writing, #TurkeyTeeth has 700.8 million views on TikTok and 18.2k Instagram posts. In recent years, social media has become a key area of concern for public health. Indeed, social media is now identified as a commercial determinant of health. Social media platforms have been criticised for failing to moderate mis/disinformation across a number of areas (racism, sexism etc) and healthcare is no exception. The discourse presented in the newspapers reflects a dangerously individualised and downstream focus of responsibility, attributing dental healthcare utilisation to the individual decisions and actions of users. This has led to some professional attitudes promoting a policy position that would prevent those who have paid for dental work overseas from accessing publicly funded dentistry. While a shallow assessment might find some merit in suggesting that personal decisions to have care abroad should not become a burden on the state, this narrative is problematic, as it fails to account for diaspora returning for care in their home country or for those emigrating. There are no similar calls to prevent those who have extensive or expensive care provided privately at home, and who then become unable to self-fund maintenance or remediation of dental treatment, from accessing care. Strengths and limitations A strength of this study is that it gives an understanding of the key issues around dental tourism as presented in the UK press media. The study used a comprehensive search strategy and robust methodology to qualitatively analyse the data and develop the study themes. A limitation of this study is that the data were limited to the ten newspapers with the highest print and online readership. We did not include data from non-newspaper sources, such as the BBC website, or television. Further, we specifically sought newspaper narratives from the past five years to understand contemporary perspectives around dental tourism. This limited the number of articles included in the review and does not reflect insights into earlier news reporting of dental tourism. The search covered articles published up to May 2023. Therefore, narratives around dental tourism between the search and publication dates are not represented in this article and may have changed during this time. A content analysis may have been an alternative approach for the analysis of the data presented in this article. The study is UK-centric and the findings may lack transferability to other cultural contexts. Implications for practice Following on from the findings of this study, we make three key recommendations.
Regulation of social media advertising for cosmetic dentistry abroad which may be accessed by under 18s Last year, following a public consultation, the Committee of Advertising Practice and Broadcast Committee of Advertising Practice introduced restrictions prohibiting cosmetic intervention advertising from being directed at people under the age of 18. The restrictions have been in effect since 25 May 2022. The regulations stipulate that cosmetic interventions must not appear in non-broadcast media directed at under 18s, where under 18s make up over 25% of the audience, and during or adjacent to programmes commissioned for, principally directed at, or likely to appeal particularly to, under 18s. We recommend similar guidelines for cosmetic dental advertising on social media to safeguard children from being exposed to predatory messaging. Guidance for consumers The General Dental Council has recently published a document about going abroad for dental care which supports consumers during the decision-making process. However, people who wish to seek dentistry abroad may be vulnerable to misinformation and may not understand the treatment that they are being offered. Guidance at a national level should also provide insight into the differences between cosmetic options (eg crowns or veneers versus Invisalign and whitening), as well as the advantages and disadvantages of each treatment. The NHS has a treatment abroad checklist which identifies a series of warning signs when seeking healthcare outside of the UK. A list of reputable overseas dental care providers could be compiled to guide patients toward accessing better-quality care. Further, we are in agreement with BDA recommendations for raising more awareness of the risks, including proactive campaigns to inform the public, although we also recognise that professional narratives can be vulnerable to bias. In the absence of national leadership from regulatory or health bodies, this role could be facilitated by joint enterprise from professional groups within dentistry and consumer advocacy groups, aligning with professional associations' missions and roles within the social contract. Compassionate dental care When people, including our patients, are making difficult decisions about how to improve their appearance and how to do so within the boundaries of their available resources, it is important to respond with compassion. As a profession, it is crucial for dentistry to have an insight into the strong pressures exerted by the aesthetic expectations of the society in which we live and work, and the potential consequences for people who do not conform (eg lower perceptions of intelligence, impacted job opportunities). As a result of oral health stigma and shame, people can experience loneliness, lower self-confidence and poorer quality of life. These are strong motivators to seek appearance-enhancing procedures abroad, irrespective of the potential negative consequences: ‘This decision isn't an easy one for many, but if you see someone with a Turkey smile, please be kind and have understanding how hard and terrifying it was for those people to undergo this. It's no walk in the park' (woman who has had dentistry abroad, article 12). Future research This article has highlighted that journalists and the wider public are sourcing and sharing health information about dental tourism from social media platforms.
In other studies, cross-sectional analyses of healthcare messaging on TikTok have been undertaken; the protocols for these are readily available online. For example, studies have examined the #oralhealtheducation messaging of TikTok videos. Further to this review, we recommend further research, including content analysis of TikTok videos repeated at time intervals, to understand how the dental tourism landscape is changing and how to monitor and manage access to health content that could have long-term repercussions for oral health.
Dental tourism for cosmetic enhancement is a rapidly growing trend, particularly among younger people and social media users. In this article, we have identified the common issues described in the UK media and the attitudes conveyed about cosmetic dentistry through journalistic strategies. Viral social media health trends were a means of distributing health (dis/mis)information and played a part in informing people of the opportunities for dental treatment outside of the UK. Commonly, people were quoted as underplaying or ignoring the short- and long-term risks of aggressive cosmetic procedures. By contrast, others chose to share their poor dental outcomes in a bid to discourage others. The UK dental profession strongly advocated for minimally invasive approaches, and the risk of litigation discouraged the treatment of people who required remedial care following dentistry abroad. The media conveyed an undertone of scorn and stigmatisation toward people who had had cosmetic dentistry abroad and amplified the perspectives of social media users.
A pharmacogenomic assessment of psychiatric adverse drug reactions to levetiracetam | c52bbd3d-40de-4a2f-b501-6ec2d8080af0 | 9321556 | Pharmacology[mh] | INTRODUCTION Levetiracetam (LEV) is an effective antiseizure medicine (ASM), first licensed to treat epilepsy in 1999. Upon binding to its target, the synaptic vesicle protein SV2A, seizure activity is suppressed by LEV, which putatively modulates exocytosis from synaptic vesicles, thereby inhibiting presynaptic neurotransmitter release. , As a first‐prescription monotherapy, LEV can provide seizure freedom in over 50% of people with epilepsy. , Adjunctive LEV treatment stopped focal and generalized seizures, which were previously drug resistant. , , LEV is commonly used for both monotherapy and polytherapy to treat a broad spectrum of seizure types. , Adverse drug reactions (ADRs) are associated with LEV treatment. An estimated 18% of people with epilepsy treated with LEV will experience some neuropsychiatric response, resulting in dosage lowering or, more frequently, cessation of treatment. LEV‐associated ADRs cover many phenotypes, including behavioral disorders such as irritability and personality change and affective disorders such as depression and suicidal ideation. Furthermore, ~1% of people exposed to LEV will experience drug‐induced psychotic reactions, a significantly higher rate than associated with other ASMs. , As a group, psychiatric and behavioral side effects have the highest economic burden of all ASM‐related ADRs. Previous pharmacogenomic research into ASM‐associated ADRs, primarily focused on univariate analyses, has identified several clinically important predictors of clinical relevance. For example, human leukocyte antigen (HLA) region alleles HLA‐B*15:02 and HLA‐A*31:01, as well as the cytochrome P450 ( CYP)2C9 *3 allele are strong predictors of aromatic ASM‐induced severe cutaneous adverse reactions. , , , A previous effort, focused on a limited number of single nucleotide polymorphisms (SNPs), reported a correlation between LEV‐induced psychiatric ADRs and genetic variation linked to dopaminergic activity. To date, there has been no genomic investigation of LEV psychiatric ADRs. Polygenic risk scoring (PRS) is a method used to assess an individual's cumulative burden of common genetic variants associated with a disease or trait. The predictive potential of PRS in the field of pharmacogenomics has been demonstrated previously. For example, people with bipolar disorder who have a higher PRS for schizophrenia were shown to be less likely to respond to mood‐stabilization treatment with lithium. PRS for non‐melanoma skin cancer has been shown to predict the risk of and time to azathioprine‐associated post‐transplant skin cancer. The role of rare genetic variation in pharmacogenomics is less well assessed. Rare genetic variants in the SLCO1B1 gene seem to influence the clearance of methotrexate, a chemotherapeutic agent. Rare variation in the CYP genes CYP3A4 and CYP2C9 appears to explain the 18.4% and 43.1% spectrum of enzyme activity. Bioinformatic predictions of the contribution of rare variation to drug metabolism suggest that rare variants may account for a substantial proportion of inter‐individual variability of the metabolism of drugs such as warfarin and the statin medication simvastatin. We utilized a variety of approaches to assess the role of genetic variation in psychiatric and behavioral ADRs associated with LEV. 
First, we applied a univariate genome-wide association study (GWAS) approach to identify individual common genetic risk loci for LEV-induced behavioral ADRs or LEV-associated psychotic reactions. We then applied a polygenic approach, using PRS to test whether a higher polygenic burden for schizophrenia can predict LEV-associated psychotic reactions. Finally, we performed a burden analysis of exome data to determine whether rare variants are associated with this clinical condition compared to controls.
METHODS All research participants (or their legal guardians in the case of minors or individuals with intellectual disability) provided written, informed consent. The study was approved by ethics committees at each study site. 2.1 Cohort assembly Genetic and phenotypic data on cases and controls were obtained from various recruitment sites. All cases and controls were people with epilepsy and a history of treatment with LEV. EpiPGX Consortium samples were contributed from the following 10 sites: The Royal College of Surgeons (Dublin, Ireland), Antwerp University Hospital (Belgium), Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS) “G. Gaslini” Institute (Genova, Italy), the University of Liverpool (UK), the University of Tubingen (Germany), University Medical Centre (Utrecht, The Netherlands), UCL Queen Square Institute of Neurology (UK), the University of Glasgow (UK), the University of Bonn (Germany), and the University of Melbourne (Australia). We obtained additional cases (beyond EpiPGX) from the Beaumont Hospital Epilepsy Biobank (Dublin, Ireland), the Columbia University Medical Center (United States), and the Department of Medicine at the University of Melbourne, Austin Health (Australia). 2.2 Case and control phenotyping All phenotyping was conducted by the neurology team where the participant was recruited. To meet the criteria of an ADR, each case must have (1) occurred within 6 months of the initiation of LEV treatment, (2) led to withdrawal or dose reduction of LEV, (3) reversed or improved after withdrawal or dose reduction, and (4) not been attributed to any other cause by the treating or phenotyping clinician. We specifically examined two LEV ADR phenotypes: 1: Any LEV-induced behavioral disorder, defined as one or more of the following: agitation, aggression, irritability, confusion, or cognitive decline. 2: LEV-induced psychotic reaction: vivid hallucinations, misidentifications, delusions and/or ideas of reference (often of a paranoid or persecutory nature), psychomotor disturbances (excitement or stupor), and an abnormal affect, ranging from intense fear to ecstasy. The sensorium is usually clear, but some degree of clouding of consciousness may be present, although not severe confusion. A psychiatrist must have confirmed the diagnosis. Any cases with a previous history of psychotic illness were excluded. Controls were LEV-treated people with epilepsy with no psychiatric side-effects reported in clinical notes after a minimum of 6 months of treatment. 2.3 Genotyping and quality control EpiPGX samples were genotyped on various Illumina chips and underwent imputation and quality control (QC) processes, as reported previously. The additional samples from Dublin (Beaumont) and Melbourne were genotyped on the Illumina Global Screening Array chip and imputed on the Sanger imputation server (https://imputation.sanger.ac.uk/) using the Haplotype Reference Consortium release 1.1 panel as a reference. The newly genotyped samples underwent the same QC procedures as the EpiPGX cohort (see Ref. 24), and were then merged with the EpiPGX data set for further analysis. To ensure European ancestry and genetic homogeneity, all samples were merged with the Human Genome Diversity Project (HGDP) samples. Principal component analysis was conducted by thinning for linkage disequilibrium using PLINK 1.9 (--indep-pairwise 1000, 100, 0.1), and then estimating principal components (PCs).
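To make the LD-thinning and PC-estimation step concrete, the sketch below wraps the PLINK 1.9 calls in R and then plots the first two PCs for the merged study and HGDP samples. It is a minimal illustration only: the fileset name "study_hgdp", the labels file "hgdp_labels.txt", and the choice of 20 PCs are assumptions rather than the study's actual pipeline; the graphing and outlier exclusion that this feeds into is described in the text that follows.

```r
# Minimal sketch (assumed file names): LD-thin the merged study + HGDP data set,
# estimate principal components with PLINK 1.9, and plot PC1 vs PC2 so study
# samples can be checked against the European HGDP reference cluster.

# LD thinning with the parameters reported above (window 1000, step 100, r^2 0.1)
system2("plink", c("--bfile", "study_hgdp",
                   "--indep-pairwise", "1000", "100", "0.1",
                   "--out", "thinned"))

# Principal components on the LD-thinned SNP set (20 PCs assumed here)
system2("plink", c("--bfile", "study_hgdp",
                   "--extract", "thinned.prune.in",
                   "--pca", "20",
                   "--out", "pcs"))

# Read PLINK's .eigenvec output (FID, IID, PC1..PC20) and attach HGDP population labels
pcs <- read.table("pcs.eigenvec",
                  col.names = c("FID", "IID", paste0("PC", 1:20)))
labels <- read.table("hgdp_labels.txt", header = TRUE)   # assumed columns: IID, population
pcs <- merge(pcs, labels, by = "IID", all.x = TRUE)
pcs$group <- ifelse(is.na(pcs$population), "study", as.character(pcs$population))

# Plot the top two PCs; study samples should overlay the European reference cluster
plot(pcs$PC1, pcs$PC2, col = as.integer(as.factor(pcs$group)), pch = 20,
     xlab = "PC1", ylab = "PC2", main = "Study samples against the HGDP reference")
```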
The top two PCs were graphed using R v3.5, and any samples which did not overlay the European HGDP samples on the PCA plot were excluded. 2.4 GWAS We used the PGA2 software to estimate GWAS power, based on a minimum minor allele frequency of 5% to detect an association to the alpha level of 5 × 10 −8 under an additive model. GWAS analyses were carried out using a frequentist association model in SNPTEST, with sex and the top six principal components included as covariates to account for bias and population stratification. The threshold for genome‐wide significance was set at p < 5 × 10 −8 . We included only autosomal SNPs in our analyses. 2.5 Polygenic risk scoring Polygenic risk scores for schizophrenia were estimated for all samples with LEV‐induced psychosis, and controls, using PRSice2. GWAS results for schizophrenia were obtained from the Psychiatric Genomics Consortium. All SNPs from the schizophrenia GWAS with p ‐values ≤ .5 were included in the PRS analysis. PRS were normalized to mean 0 and SD 1 and then regressed onto LEV psychosis case: control status using R v3.5, with the top six PCs and sex included as covariates. We used the pROC R package to estimate the area under the receiver‐operating characteristic (ROC) curve of the above PRS model, compared to the null model, and a model ccomprising covariates only (PCs 1–6 and sex). 2.6 Exome sequencing and analysis Whole‐exome sequencing was conducted at deCODE genetics on the Illumina HiSeq 2500 with the Nextera Rapid Capture Expanded Exome kit (Illumina). Adapter sequences were removed, and the data were put through a Genome Analysis Toolkit (GATK ;) best practices pipeline with the GRCh37 human reference genome for joint calling, recalibration, filtering, and variant annotation. We excluded any variant position with a mean depth of less than 10 in all samples. Only samples with more than 30× mean coverage or more than 70% of the exome intervals covered by at least 20× mean coverage were included for analysis. We first performed a hypothesis‐free test single‐gene collapsing analysis with the combined and multivariate collapsing (CMC) method with a two‐sided Fisher exact test using rvtests. We then performed gene set collapsing tests with the regression‐based two‐sided SKAT‐O method, testing for a burden of functional variants in genes that had been associated previously with schizophrenia ( SLC6A1, SETD1A, RBM12 ). PCs 1–6 and sex were included as covariates.
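As an illustration of the polygenic-score modelling step described in Section 2.5, the sketch below shows one way such an analysis could be set up in Python. It is not the authors' pipeline (they used PRSice2, R, and pROC); the synthetic data, column names, and sample sizes are placeholders for illustration only.

```python
# Minimal illustrative sketch (assumptions: column names prs/sex/PC1..PC6/case are hypothetical,
# data are synthetic). Regress normalized PRS on case status with covariates and compare AUCs.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 957  # e.g. 37 cases + 920 controls
df = pd.DataFrame({
    "prs": rng.normal(size=n),
    "sex": rng.integers(0, 2, size=n),
    **{f"PC{i}": rng.normal(size=n) for i in range(1, 7)},
})
df["case"] = rng.binomial(1, 0.04, size=n)  # placeholder outcome, not real phenotypes

# Normalize PRS to mean 0, SD 1, as described in the Methods
df["prs"] = (df["prs"] - df["prs"].mean()) / df["prs"].std()

covars = ["sex"] + [f"PC{i}" for i in range(1, 7)]
full = LogisticRegression(max_iter=1000).fit(df[["prs"] + covars], df["case"])
base = LogisticRegression(max_iter=1000).fit(df[covars], df["case"])

auc_full = roc_auc_score(df["case"], full.predict_proba(df[["prs"] + covars])[:, 1])
auc_base = roc_auc_score(df["case"], base.predict_proba(df[covars])[:, 1])
print(f"AUC with PRS: {auc_full:.2f}  AUC covariates only: {auc_base:.2f}")
```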
RESULTS

3.1 Cohort description
We included 1106 people with epilepsy treated with LEV in our analysis, of whom 149 had LEV-associated behavioral disorder, 37 had LEV-associated psychotic reaction, and 920 were controls. A full breakdown of case phenotypes and controls is provided in Table . Fifty-four percent of cases in our study were female, compared to 55% of controls. Cases had an average age of 46 and controls an average age of 51, with an average age at first seizure of 17 for cases and 21 for controls. Twenty-seven percent of cases had generalized epilepsy, 59% had focal epilepsy, and 14% had unclassified epilepsy. Twenty-five percent of controls had generalized epilepsy, 67% had focal epilepsy, and 9% were unclassified. Among cases, the most common EEG finding was generalized spike/wave discharges.

3.2 Genome-wide association analyses of LEV-associated psychiatric ADRs
We conducted a GWAS of 149 cases with LEV-associated behavioral disorder vs 920 controls. After quality control, 3.8 million SNPs were included in the association analysis. Our analysis had 80% power to detect a genetic variant with a relative risk of 3.34 or greater. We did not observe any variants that surpassed the significance threshold of 5 × 10−8 (Figure ). The variant rs1800497, which had been reported previously to predict LEV-induced psychiatric ADRs, had an uncorrected p-value of .458 in our LEV-associated behavioral disorder GWAS, although the phenotype criteria used in that study were not an exact match to ours. We next conducted a GWAS of LEV-induced psychotic reaction, which included 37 cases and 920 controls across 3.8 million SNPs. We estimated 80% power to detect a variant with a relative risk of 7.22 or greater. No genome-wide significant signals were observed (Figure ).

3.3 Polygenic risk score analysis
We tested the hypothesis that people who experience LEV-induced psychotic reaction harbor an excess of common variants associated with schizophrenia using PRS analysis (see ). We found that the PRS for schizophrenia was significantly higher in LEV-psychotic reaction cases compared to controls (estimate = .4886, standard error [SE] = .1881, p = .0097). Schizophrenia PRS explained 4% of the phenotypic variance in case–control status. Generating a ROC curve of LEV-psychotic reaction case–control status from a model of schizophrenia PRS, PCs 1–6, and sex produced a curve with an area under the curve (AUC; predictive power) of 0.65 (Figure ). This is greater than the AUCs of the null model (0.50) and a model built on covariates alone (0.57). LEV-psychosis cases make up 3.87% of our cohort ( n cases = 37, n controls = 920; Table ). If we take only samples in the top 10% of the schizophrenia PRS distribution, LEV-psychosis cases make up 8.3% of the cohort ( n cases = 8, n controls = 88). The bottom 10% of the schizophrenia PRS distribution contains only 1% LEV-psychosis cases ( n cases = 1, n controls = 99).

3.4 Rare variant burden analysis
To test the hypothesis that rare variant burden can contribute to LEV-psychotic reaction, we performed rare variant analysis on people with LEV-induced psychotic reaction. First, all genes were tested individually for enrichment of variation. After Bonferroni correction for 18 668 protein-coding genes, no gene reached the threshold of statistical significance ( p < 2.67 × 10−6). We then tested for rare variant burden in genes previously found to harbor rare variants in people with schizophrenia.
We found no significant enrichment of rare variation in SLC6A1 ( p = .819), SETD1A ( p = .030), or RBM12 ( p = .220), given a threshold of statistical significance of p < .016. Testing rare-variant burden in these schizophrenia-associated genes together as a unit also found no significant enrichment ( p = .83). We did not observe a burden of rare variants in SV2A , the target of LEV ( p = .492).
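To make the collapsing logic of Section 2.6 concrete, the following minimal sketch shows a CMC-style comparison of rare-variant carrier counts between cases and controls with a two-sided Fisher exact test and the Bonferroni threshold quoted above. The carrier counts are hypothetical placeholders, and the authors' actual analysis used rvtests and SKAT-O rather than this code.

```python
# Illustrative sketch only: CMC-style gene collapsing compares how many cases vs controls
# carry at least one qualifying rare variant in a gene (counts below are hypothetical).
from scipy.stats import fisher_exact

n_cases, n_controls = 37, 920
carriers_cases, carriers_controls = 3, 40  # hypothetical carrier counts for one gene

table = [[carriers_cases, n_cases - carriers_cases],
         [carriers_controls, n_controls - carriers_controls]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")

bonferroni_alpha = 0.05 / 18_668  # genome-wide threshold used in the single-gene analysis
print(f"OR={odds_ratio:.2f}, p={p_value:.3g}, significant={p_value < bonferroni_alpha}")
```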
DISCUSSION

Levetiracetam (LEV) is a highly effective ASM that is associated with behavioral adverse events in a proportion of patients, including affective disorder, aggression, and psychotic reactions. We applied various analytical models to assess the role of genetic variation in LEV behavioral ADRs. We present evidence that the genetic burden for schizophrenia, as quantified by PRS, is a risk factor for LEV-induced psychotic reactions in people with epilepsy. We found no evidence of rare variant burden in LEV psychosis. From the univariate GWAS analyses, we can conclude that there are no common variants with an OR >7.22 associated with LEV-induced psychotic reaction, or an OR >3.34 associated with LEV-induced behavioral disorder. We then constructed a predictive model for LEV-psychotic reaction using schizophrenia PRS with a predictive power (as measured by AUC/ROC analysis) of 65%. This model explained 4% of the variation in case–control status for LEV psychosis in our cohort. The schizophrenia GWAS used to estimate the PRS explained 7% of the variation in schizophrenia case–control status, representing the upper limit of phenotypic variation that could be explained by a PRS model generated from it. More powerful GWAS of schizophrenia that explain more phenotypic variation may allow more accurate PRS models in the future. These results raise the possibility of screening people with epilepsy, before exposure to LEV, to identify those at risk of developing psychotic reactions as an ADR. Our findings must be validated in an independent sample, ideally collected in a prospective study, to clarify their clinical potential. The ability to screen for individuals at risk of developing LEV-induced psychotic reactions could be improved by including known clinical risk factors such as a history of depression or anxiety or a history of recreational drug use. Given that LEV is a commonly prescribed first-line ASM, and that up to 18% of people prescribed LEV will experience some side effects, identifying those at risk of ADRs would appear clinically attractive. Our study has limitations. First, we focused on people of European ancestry. Given that PRS effects cannot be assumed to act consistently across ethnic backgrounds, the role of schizophrenia PRS in non-European LEV-psychosis cases must be assessed separately. Second, the relatively low number of cases included in our analyses limited our power to detect effects, particularly in the rare variant analysis. Finally, a potential dose- or concentration-dependence of LEV-induced psychosis could not be explored in this study; clinicians could nevertheless consider optimizing therapeutic LEV dosing in affected patients. In summary, we showed that polygenic burden for schizophrenia is a risk factor for LEV-induced psychotic reactions. To assess the clinical utility of this result, it should be tested in an independent and ideally prospective cohort. Next steps would include testing larger cohorts for univariate GWAS signals and further exome analysis in larger samples to assess the rare variant contribution to LEV psychiatric ADRs. Future research could also perform similar genetic analyses on other ASMs that are known to be associated with behavioral and/or psychiatric ADRs.
The authors have no financial conflicts of interest to disclose. We confirm that we have read the Journal's position on issues involved in ethical publication and affirm that this report is consistent with those guidelines.
Management of hyponatraemia and hypernatraemia during the Covid-19 pandemic: a consensus statement of the Spanish Society for Endocrinology (Acqua Neuroendocrinology Group) | eabe6f68-8acd-4148-9e0a-0716e3f03b07 | 7864617 | Physiology[mh] | Mismanagement of water balance disorders during acute illness can worsen morbidity and mortality . Thus, adequate evidence-based treatment of CH or DI, as previously described in European and American Hyponatraemia Guidelines and in the European Society for Endocrinology statement on the management of these disorders during the COVID-19 pandemic , may improve outcomes, including the risk of hospitalization and therefore of SARS-COV2 exposure. COVID-19, particularly in severe forms, can induce numerous electrolyte abnormalities that might be particularly severe in patients with chronic water balance abnormalities . According to the published literature, these are some of the most common alterations that might complicate the management of patients with water balance abnormalities and SARS-COV2 infection:

a. Complications linked to the COVID-19 pandemic that might worsen the outcome of patients with either DI or CH:
- COVID-19-related electrolyte abnormalities: hyponatraemia, hypernatraemia, hypokalemia, mixed alkalosis (respiratory alkalosis due to hyperventilation, metabolic alkalosis linked to hypovolaemia).
- Loss of follow-up of CH or DI due to the patient's and/or health care worker's fear of contagion, or to work overload of the health care worker during COVID-19 outbreaks.
- Difficulties accessing water due to psychosocial factors, such as social isolation during confinement and anxiety or depression linked to outbreaks, which can be particularly harmful in CH and DI patients. Furthermore, water intake can activate mesolimbic reward circuits in the central nervous system, which could lead to psychogenic polydipsia in these patients and ultimately worsen water balance .

b. DI-specific risks during the COVID-19 pandemic:
- Loss of follow-up during the pandemic might lead to undetected hyponatraemia due to desmopressin overdose, particularly when administered through the intranasal route .
- SARS-COV2 infection might also increase antidiuresis; however, larger series with a more in-depth description of volume status and serum and urine electrolytes are required.
- During SARS-COV2 infection, extrarenal water losses might increase due to hyperpyrexia, hyperventilation and/or diarrhoea and cause hypernatraemia. This could worsen further if either the patient or health care workers do not ensure timely administration of desmopressin and regular monitoring of sodium levels in patients with DI .
- Patients with central DI often also suffer from adrenal insufficiency and hypothyroidism. Timely administration of thyroid hormones and glucocorticoid replacement is paramount, increasing the dose of glucocorticoids during intercurrent illnesses as described in clinical guidelines .
- Obesity is common in patients with central DI, particularly when other pituitary hormonal deficiencies or hypersecretion syndromes (i.e. Cushing's disease) are present. Obese patients suffer from a worse COVID-19-related prognosis , which highlights the need to reduce contagion risk in these patients.
- Metabolic comorbidities in patients with central DI that could worsen COVID-19 prognosis: diabetes mellitus, hypertension, obstructive sleep apnea.
- Increased risk of thrombosis in DI due to hypernatraemia, which might worsen during SARS-COV2 infection (itself associated with hypercoagulability) or, in non-infected patients, due to confinement, weight gain and lack of physical activity . Therefore, thromboprophylaxis should be systematically considered in COVID-19 patients with chronic DI.

c. Chronic hyponatraemia-specific risks during the COVID-19 pandemic:
i. Patients with chronic SIADH might suffer from decompensated hyponatraemia during COVID-19 infection due to thirst abnormalities during acute illness, lack of treatment adherence (i.e. tolvaptan, urea...) and/or lack of treatment monitoring.
ii. Changes in volume status during intercurrent illness, i.e. hypovolaemia caused by vomiting or hyperpyrexia with increased perspiration. Both tolvaptan and urea are licensed for use in euvolemic hyponatraemia due to SIADH, but patients with hyponatraemia might change volume status during acute illness.
iii. Chronic comorbidities common in patients with SIADH that might worsen COVID-19 prognosis: cancer, respiratory disorders, central nervous system disorders...
a. Non-infected patients: Outpatient follow-up according to clinical guidelines . If patients have preserved thirst sensation, are capable of regulating water intake and maintain treatment adherence, follow-up could be performed through telemedicine (phone contact, video calls...) with lab tests as required. More stringent follow-up is indicated in the following scenarios: adipsic DI, neurocognitive impairment that affects water intake, or previous episodes of hyponatraemia due to desmopressin overdose.

b. COVID-19 in patients with chronic DI: Cooperation between health care workers dealing with COVID-19 and clinicians with expertise in water balance disorders is paramount in this scenario. Some aspects of DI treatment may vary according to COVID-19 severity and water balance status on presentation.
- Outpatients with chronic DI and mild to moderate COVID-19 should have at least one blood and urine test measuring osmolality, sodium, creatinine and potassium during the acute phase, and whenever the medical team has identified significant changes in factors influencing water balance (fever, loss of thirst sensation, hyperventilation...).
- Hospital admission of COVID-19 patients with chronic DI, besides following COVID-19 evolution, should be implemented if there are significant changes in plasma sodium levels, loss of thirst, or new-onset neurocognitive impairment that hinders outpatient management (see section on “severe COVID-19”).
- Inpatient management: if admission is required for the treatment of COVID-19, conscious patients with intact thirst may maintain their usual desmopressin replacement regimen, with daily monitoring of plasma and urine sodium, potassium, creatinine and osmolality during the acute phase of COVID-19 as inpatients. Samples should be extracted one hour before the morning dose of desmopressin. It is strongly recommended that subjects with chronic DI wear medical alert bracelets or cards and duly inform their medical team of their condition at first contact. An example of a medical alert card for diabetes insipidus created by our group is provided in Fig. .
- It is mandatory to check the daily fluid balance: oral and parenteral fluid administration (including fluids needed to administer parenteral drugs), diuresis, and extrarenal fluid losses (i.e. diarrhea, vomiting, fever...).
- Daily monitoring of plasma and urine sodium, potassium, creatinine and osmolality.
- Parenteral desmopressin administration should be the first choice in severe COVID-19 patients to ensure stable and predictable desmopressin activity; conversion to parenteral doses should consider the pharmacokinetic bioequivalence among the different administration routes: 1 mcg of parenteral desmopressin equals 10 mcg of intranasal and 100 mcg of oral desmopressin; thus, a patient on chronic treatment with intranasal desmopressin 10 mcg bd should initially be changed to 1 mcg parenteral DDAVP bd (see the illustrative sketch after this list).
- It is advisable to ensure that osmotic homeostasis is maintained through at least daily monitoring of plasma and urine osmolality.
- If intercurrent illness or underlying chronic conditions impair the patient's ability to cover daily water, electrolyte and/or nutrient needs through oral intake, osmotic balance and nutritional status should be maintained using the parenteral route. Standard fluid requirements include 25–30 ml/kg-day of water, 1 mmol/kg-day of potassium, sodium and chloride, and 100–150 g/day of glucose. If extrarenal water loss increases, corrections should be implemented accordingly (diarrhea, vomiting...); i.e. daily water administration should increase by 100 ml for each degree of temperature above 37 °C.
- If the patient needs enteral or parenteral nutritional support according to established guidelines, water, electrolyte and glucose replacement should be included in the nutritional formulations . In obese patients with normal nutritional status, water, glucose and electrolyte requirements should be tailored to the patient's ideal weight. Obesity often does not preclude malnutrition, which should be equally screened for and treated.
- If a patient with chronic DI develops hyponatraemia during COVID-19, three underlying factors should be identified and treated: (1) desmopressin overdose, which can be corrected by reducing the dose of DDAVP while monitoring the clinical (consciousness, gait, speech…) and biochemical evolution; (2) secondary adrenal insufficiency, common in patients with central DI, whose under-replacement might lead to hyponatraemia (to our knowledge, no case of adrenal insufficiency due to SARS-COV2 infection has been published so far); and (3) stress-induced hyperglycemia: patients with hypopituitarism often suffer from metabolic comorbidities that make them prone to developing stress-induced hyperglycemia during COVID-19, and hyperglycemia might provoke increased osmotic diuresis and disturb water balance, particularly in patients with chronic DI. This situation should be dealt with by treating hyperglycemia according to evidence-based guidelines and covering fluid losses orally or parenterally. The desmopressin dose, however, should not be modified if osmotic diuresis is the only factor altering water balance, unless hypernatraemia develops.
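The arithmetic behind the dose conversion and baseline fluid requirements quoted above can be written out as a short, purely illustrative sketch. The helper names and the example patient are hypothetical, the figures come directly from the text, and this is not a clinical dosing tool.

```python
# Illustrative arithmetic only (assumed helpers), based on the figures quoted above:
# 1 mcg parenteral = 10 mcg intranasal = 100 mcg oral desmopressin;
# 25-30 ml/kg/day of water, plus 100 ml per degree Celsius above 37.
TO_PARENTERAL = {"oral": 1 / 100, "intranasal": 1 / 10, "parenteral": 1.0}

def parenteral_equivalent_mcg(dose_mcg: float, route: str) -> float:
    """Convert a desmopressin dose to its approximate parenteral equivalent."""
    return dose_mcg * TO_PARENTERAL[route]

def daily_water_ml(weight_kg: float, temp_c: float = 37.0, ml_per_kg: float = 27.5) -> float:
    """Baseline water requirement plus 100 ml per degree of fever above 37 C."""
    fever_extra = max(0.0, temp_c - 37.0) * 100.0
    return weight_kg * ml_per_kg + fever_extra

# Example: the intranasal 10 mcg twice-daily patient mentioned above, 70 kg, febrile at 38.5 C
print(parenteral_equivalent_mcg(10, "intranasal"))   # -> 1.0 mcg per dose
print(round(daily_water_ml(70, 38.5)))               # -> about 2075 ml/day
```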
a. Non-infected patients: Patients with chronic SIADH should regularly receive a clinical examination that includes assessment of the internal jugular vein pressure. These data are needed to appraise dose changes, and even interruption, of hyponatraemia pharmacotherapy (tolvaptan, urea...). Thus, patients with pharmacologically treated chronic hyponatraemia are not suitable for telemedicine follow-up appointments where this problem is concerned. Patients with chronic SIADH usually also suffer from malnutrition due to common precipitating factors (multimorbidity). Malnutrition is an absolute contraindication for water restriction as a treatment for hyponatraemia, as water-restricted dieting might worsen nutritional outcomes. In patients with SIADH and without malnutrition, confinement and/or loss of social interaction might also hinder adherence to water restriction; in this situation, pharmacotherapy for hyponatraemia should also be considered according to established guidelines .

b. New-onset hyponatraemia and COVID-19: Hyponatraemia, whatever the underlying cause, represents a risk factor for mortality and increased length of hospitalization. Therefore, hyponatraemia should be identified and treated accordingly in COVID-19 patients. Diagnosis of new-onset hyponatraemia in COVID-19 patients should initially include plasma and urine osmolality, sodium and potassium, and clinical assessment of volume status. Further etiological tests should be guided by the patient's underlying conditions and risk factors, volume status and evolution. COVID-19 might be associated with multiple underlying disorders that might precipitate or aggravate hyponatraemia (Fig. ). Among those etiologies, endocrinological assessment should include free thyroxine and thyroid-stimulating hormone measurement, as COVID-19 can be associated with thyroid dysfunction , as well as diagnostic tests for adrenal insufficiency (9 am cortisol and/or synacthen test). Adrenal failure might be elicited by direct SARS-COV2 infiltration of adrenal cells or by dexamethasone-induced adrenal suppression . Patients suffering from COVID-19 should not be water-restricted in order to treat hyponatraemia: SARS-COV2 infection is often associated with malnutrition, increased perspiration and hyperventilation, and these factors may aggravate the adverse effects of water restriction. Monitoring of COVID-19 patients with hyponatraemia should include daily measurements of plasma and urine sodium, potassium, creatinine and osmolality, and clinical assessment of volume status (measurement of internal jugular vein pressure, intraocular pressure...). More intensive monitoring could be required in patients with severe hyponatraemia or those with rapidly evolving hyponatraemia.

c. Hyponatraemia pharmacotherapy during COVID-19:
i. Indications for hyponatraemia pharmacotherapy in COVID-19 patients: euvolemic hyponatraemia with plasma sodium >120 mEq/l, after excluding adrenal insufficiency and hypothyroidism. Most COVID-19 patients will not be suitable candidates for water restriction as a treatment for hyponatraemia because of malnutrition, loss of thirst and increased extrarenal water loss. These patients should therefore be treated with tolvaptan or urea according to clinical guidelines .
ii. Treatment interactions between hyponatraemia and SARS-COV2 pharmacotherapy: described in Table and, for each drug, in its specific section.
- Patients with plasma sodium <120 mEq/l, regardless of their clinical status, or >120 mEq/l with severe neurological symptoms, should be treated with hypertonic saline according to published evidence-based guidelines .
- Euvolemic or hypervolemic hyponatraemia, particularly with urine osmolality >350 mOsm/kg, may be treated with loop diuretics after an oral load of 4–5 g of NaCl.
- Hypovolaemic hyponatraemia may be treated with isotonic saline fluids, no more than 30 ml/kg-day, to avoid overcorrection .

c.1 Tolvaptan in COVID-19 patients:
- Initial dose: 7.5 mg/day.
- Contraindicated in severe liver or kidney failure (Child C or GFR <30 ml/min).
- Precautions: Treatment with lopinavir/ritonavir inhibits CYP3A4, a potent pharmacological interaction with tolvaptan that increases plasma levels of the latter and could increase its aquaretic effect; simultaneous use of both compounds should be closely monitored. Tocilizumab enhances CYP3A4 activity, which could decrease tolvaptan's circulating levels and its aquaretic effect. No significant interactions have been described between hydroxychloroquine and any drug licensed for use in hyponatraemia.

c.2 Urea in COVID-19:
- No clinical experience has been published with urea in SARS-COV2-infected patients with hyponatraemia.
- Warnings: Urea may decrease thirst; treated patients should be monitored, measuring intake and loss of fluids, changes in thirst, urine volume and new-onset nocturia.

c.3 Furosemide in COVID-19 patients:
- For patients with euvolemic or hypervolemic hyponatraemia with urine osmolality >350 mOsm/kg.
- Furosemide may exacerbate hypokalemia in COVID-19 patients, who also display high activity of the renin-angiotensin-aldosterone axis .
Hypernatraemia is a common laboratory abnormality in COVID-19 patients, found in 3.7% of patients hospitalised due to COVID-19 in the HOPE registry . As with hyponatraemia, increased plasma sodium levels are an adverse prognostic factor in non-COVID patients, although this awaits confirmation in COVID-19 series. Hypernatraemia in COVID-19 develops due to an imbalance between increased water loss (hyperpyrexia, diarrhoea, vomiting, polyuria due to hyperglycemia or acute kidney injury...) and decreased water supply (anorexia with adipsia, altered consciousness, insufficient parenteral replacement...) . Diabetes insipidus as a direct consequence of SARS-COV2 has not been pathophysiologically described so far. Therefore, while awaiting descriptive studies of hypernatraemia in this population, it can reasonably be assumed that COVID-19 patients with hypernatraemia mostly suffer from increased extrarenal water loss. Treatment of hypernatraemia in COVID-19 patients is similar to that described for the general population, based on parenteral fluid replacement .

General recommendations:
- Thrombosis prevention: hypernatraemia and SARS-COV2 infection are both independent thrombosis risk factors. If no contraindication is found, low-molecular-weight heparin prophylaxis should be initiated on admission.
- Plasma and urine sodium measurement every 12–24 h.
- Objective: reduction of sodium levels by <10 mEq/l every 24 h to reduce the risk of cerebral edema (illustrated in the sketch below).

Severe hypernatraemia with loss of consciousness:
- Dextrose 5% serum at 1.35 ml/kg-hour.
- If there is clinical hypovolaemia, administer isotonic fluid (saline 0.9% or Ringer) in parallel to dextrose, at 30 ml/kg/day, to increase the effective circulating volume. Patients with increased risk of cerebral edema (liver failure, heart failure) may instead receive saline 0.45% at 30 ml/kg-day.
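The two numerical constraints above (a dextrose 5% infusion of 1.35 ml/kg per hour and a sodium correction of less than 10 mEq/l per 24 h) can be illustrated with a short sketch. The helper names and example values are hypothetical, and this is not a substitute for the cited guidelines or for clinical judgement.

```python
# Illustrative arithmetic only (assumed helper names), mirroring the figures quoted above.
def dextrose_rate_ml_per_h(weight_kg: float) -> float:
    """Dextrose 5% infusion rate at 1.35 ml/kg per hour."""
    return 1.35 * weight_kg

def correction_within_limit(na_start: float, na_planned_24h: float, limit: float = 10.0) -> bool:
    """Check that the planned sodium decrease over 24 h stays below the stated limit."""
    return (na_start - na_planned_24h) < limit

print(dextrose_rate_ml_per_h(70))          # -> 94.5 ml/h for a 70 kg patient
print(correction_within_limit(162, 153))   # planned drop of 9 mEq/l over 24 h -> True
```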
A comparative analysis of GPT-3.5 and GPT-4.0 on a multiple-choice ophthalmology question bank: A study on artificial intelligence developments | 87dd2810-33b5-45db-972b-045610904159 | 11809821 | Ophthalmology[mh] | The medical industry is among the many fields where artificial intelligence (AI) has shown increasing promise. In recent years, doctors have frequently used artificial intelligence to assist them in diagnosis, treatment, and research . In the past, AI has been utilized to identify different retinal pathologies, such as age-related macular degeneration and diabetic retinopathy . The literature also shows how AI can be helpful in conditions other than retinal pathologies . Large language model (LLM) Generative Pretrained Transformer 3 (GPT-3) produces text that appears human. It received training on a vast corpus of text (more than 400 billion words) from the internet, which included webpages, books, and articles . The large language model (LLM) ChatGPT (OpenAI, San Francisco, CA, USA) has caused a paradigm shift in the application of artificial intelligence in medicine . Currently limited to training using online resources until September 2021, GPT-3.5 is an improved version of GPT-3 (2020) trained on a wide range of parameters . In March 2023, OpenAI unveiled GPT-4, a new generation LLM that outperforms GPT-3.5 and performs at a human level across various academic benchmarks . The large language models (LLMs) and text-based LLMs can potentially improve medical diagnosis and interpretation. OphthoQuestions question banks, the Basic and Clinical Sciences Course (BCSC) Self-Assessment Programme, and FRCOphth examinations have previously been used to test the effectiveness of these models, particularly in ophthalmology . The performance of LLMs in ophthalmology question answering is still not sufficiently analyzed, although there are studies on their performance . This study evaluated a comparative analysis of GPT-3.5 and GPT-4.0 on the multiple-choice ophthalmology question bank using OphthoQuestions ( www.ophthoquestions.com ), a popular question preparation bank. Ophthalmologists frequently consult this multiple-choice question bank as these resources have been linked to improved performance on the standardized Ophthalmic Knowledge Assessment Programme (OKAP) examination taken by ophthalmology residents in the United States and Canada, particularly in studying for board examinations. Exploring OphthoQuestions In January 2024, using a personal account on OphthoQuestions ( www.ophthoquestions.com ), 520 questions were selected from 4,551 OphthoQuestions. Since the GPT-3.5 and GPT-4.0 multiple-choice question bank performances were compared, using questions that did not contain visual data, such as clinical, radiological, or graphic images, was preferred since the GPT-3.5 model could not analyze visual data. These questions were not available to the general public, meaning there was no chance that they were previously indexed in the ChatGPT training data set or any search engine. The researcher generated 40 random questions from each of the 13 ophthalmology sub-specialties. These subgroups included general medicine, fundamentals, clinical optics, cornea, uveitis, glaucoma, lens and cataract, pathology and tumors, neuro-ophthalmology, pediatrics, oculoplastics, retina, vitreous, and refractive surgery. Study Design The researcher manually entered the content of the text-based questions into the program. A new chat was opened for each question. 
Then, the statement “You should choose one of the following options” was written. Questions containing visual elements such as clinical images or medical photographs were not included in our evaluation, as ChatGPT-3.5 could not analyze them. This study assessed gross accuracy in correctly completing a series of multiple-choice questions (MCQs). ChatGPT was considered to have given a “correct” answer, for scoring purposes, when it selected the option indicated by the answer key for a given question. Conversely, an answer was considered “incorrect” if it did not match the answer key's suggested option, if the platform failed to identify any option when prompted further, or if the third attempt was incorrect in the case of conflicting duplicate answers. The answers were then checked against the answer key by the researcher, and the correct answers were analyzed by subgroup and overall. A conservative analysis strategy was adopted, preferring not to set pass thresholds as in other studies; instead, we assessed whether the performance of GPT-4.0 differed from that of GPT-3.5 .

Statistical analysis To analyze categorical variables, Fisher's exact test and the chi-square (χ2) test were used to compare the number of correct responses between GPT-4.0 and GPT-3.5. The Kolmogorov-Smirnov test was used to assess the normality of the data. Accuracy and compliance rates were reported as percentages. Accuracy across the thirteen subspecialties was also compared using chi-square analysis. A P-value of 0.05 was regarded as statistically significant. Analyses were performed with SPSS, version 25.0 (SPSS Inc., Chicago, IL, USA).
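As a minimal illustration of the kind of comparison described above, the sketch below runs a chi-square and a Fisher exact test on the overall correct/incorrect counts reported in the Results (408/520 for GPT-4.0 and 333/520 for GPT-3.5). The study's own analysis was run in SPSS and may have been configured differently, so the exact p-values need not coincide.

```python
# Illustrative sketch only: 2x2 comparison of correct vs incorrect answers per model.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[408, 520 - 408],    # GPT-4.0: correct, incorrect
                  [333, 520 - 333]])   # GPT-3.5: correct, incorrect

chi2, p_chi2, dof, _ = chi2_contingency(table)
_, p_fisher = fisher_exact(table)
print(f"chi2={chi2:.2f} (dof={dof}), p={p_chi2:.4g}; Fisher p={p_fisher:.4g}")
```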
Overall, GPT-4.0 and GPT-3.5 answered 408 questions (78.46%; 95% CI [70, 88%]) and 333 questions (64.15%; 95% CI [53, 74%]) of the 520 questions correctly, respectively. GPT-4.0 answered statistically significantly more questions correctly than GPT-3.5 (p = 0.0195). ChatGPT-4.0 showed a statistically significant difference compared to ChatGPT-3.5 in giving correct answers in all subgroup analyses (p<0.05). In the subgroup analyses, pathology and tumors was the group with the highest difference in the percentage of correct answers, whereas the groups with the lowest difference in correct answers were the retina and vitreous section and the neuro-ophthalmology section. GPT-3.5 performance was significantly variable across the 13 subspecialties (p = 0.034). GPT-4.0 showed more consistent results across subspecialty groups than GPT-3.5, with no significant differences (p = 0.078). GPT-3.5 had the highest percentage of correct answers in fundamentals (74%) and the lowest in pathology and tumors (53.0%). GPT-4.0 showed the highest percentage of correct answers in general medicine (88%) and the lowest rate of correct answers in clinical optics (70%). shows the number and percentage of correct answers given by GPT-4.0 and GPT-3.5.

This research provides promising new evidence of ChatGPT's ability to handle complex clinical and medical data, particularly the development and consistency of artificial intelligence algorithms. AI chatbot technology has developed rapidly and is being used increasingly in e-society. ChatGPT, in particular, has become one of the fastest-growing computer applications in history, gaining 100 million active users in just 2 months . Integrating AI into clinical practice and medical education has grown in popularity recently. Recent research indicates that the newest LLM versions exhibit a promising problem-solving capacity . With its widespread use, it has been the subject of many studies; for example, one study reported the relative success of ChatGPT on a sample United States Medical Licensing Examination (USMLE) Step 1 and Step 2 Clinical Knowledge assessment, achieving a passing threshold of approximately 60% . The effectiveness of artificial intelligence was also studied in another board exam: in a study of the efficacy of artificial intelligence in the European Ophthalmology board exam, it was reported that GPT showed superior success, answering 6188 of 6785 questions correctly . Very few studies in the literature compare the performance of GPT-3.5 and GPT-4.0 against each other in ophthalmology .
In one of these studies, GPT-4 was tested on two multiple-choice question sets of 260 questions from the Basic and Clinical Science Course (BCSC) Self-Assessment Program and OphthoQuestions question banks. The top-performing GPT-4 model was also contrasted with GPT-3.5 and past human performance. Antaki et al. found that GPT-4 significantly outperformed GPT-3.5 on simulated ophthalmology board-style exams, similar to the findings presented in this study . In another study evaluating the ability to answer ophthalmology-related questions at different levels of ophthalmology education, GPT-4.0 was found to perform significantly better than GPT-3.5 (75% vs 46%, p<0.01) . In a relatively recent study, Moshirfar and colleagues evaluated the responses of GPT-4.0, GPT-3.5, and humans to 467 questions from a question bank called StatPearls and obtained scores of 73.2% for GPT-4.0, 55.5% for GPT-3.5, and 58.3% for humans, respectively. Although it is not appropriate to compare that study and the presented study directly, Moshirfar et al. found that GPT-4.0 answered a higher percentage of questions correctly than GPT-3.5, similar to the results in this study . This study found that GPT-4.0 answered more questions correctly than GPT-3.5, and the difference between the two groups was statistically significant (78.46% vs. 64.15%; p = 0.0195). ChatGPT-4.0 showed a statistically significant difference compared to ChatGPT-3.5 in giving correct answers in all subgroup analyses (p<0.05). In the subgroup analyses performed in this study, GPT-3.5 performance was significantly variable across the 13 subspecialties (p = 0.034), whereas GPT-4.0 showed more consistent results across subspecialty groups, with no significant differences (p = 0.078). This result indicates that the GPT-4.0 algorithm is statistically more successful than GPT-3.5 on this ophthalmology question bank. Finally, the statistically significant advantage of GPT-4.0 over GPT-3.5 in this study should be considered alongside algorithm developments in the coming years, especially for online examinations, since the use of artificial intelligence is a growing threat to test integrity. Thus, protocols such as mandatory proctoring should be considered.

Limitation of Study The first limitation of this study was that image- or video-based questions, which could not be easily analyzed by ChatGPT-3.5 (the version offered free of charge), were not evaluated. This should be considered a limitation that might affect the results. Furthermore, the questions included in the study were not categorized as easy, medium, or complex. Even though the questions were chosen randomly, this factor should also have been considered statistically.

The results of this study point to the potential for AI, and ChatGPT in particular, to positively contribute to medical education and practice. Moreover, the success of AI on this multiple-choice question bank could pave the way for greater integration of AI technology into medical education and continuing professional development.
In the coming years, ChatGPT's proficiency in clinical management and decision-making should be supported by further studies demonstrating that it can be a valuable resource for ophthalmologists and other medical professionals seeking information and guidance on complex cases. Furthermore, ChatGPT-4.0 was statistically more consistent and accurate than ChatGPT-3.5 in the study presented here. AI technology, especially in ophthalmology, should be seen as a complement to, rather than a replacement for, medical professionals.
A multicentre survey on the perception of palliative care among health professionals working in haematology | 9fb2f5b5-c399-40f7-8f63-189366529c2b | 10973048 | Internal Medicine[mh] | Haematological malignancies (leukaemia, lymphoma, and myeloma) all have different aetiologies, prognoses, and frequencies . According to data provided by the “Global Cancer Observatory” in 2020, the diagnosis of haematological neoplasms corresponds to 6.64% of all cancer diagnoses, with an overall mortality of 7.13% . These illnesses are characterised by long and complex prognosis, unpredictable disease trajectories, rapid clinical deterioration, and high symptom burden due to polychemotherapy regimens, radiotherapy, and/or bone marrow transplantation . Urgent hospitalisations for serious medical complications are frequent, especially in the advanced stage of the disease. However, there is a growing availability of new treatments that contribute to increasing both the possibility of recovery and long-term survival . This can affect patients’ quality of life, particularly in cases of very long hospitalisations or intensive medical treatments up to the last stages of life. Recent international literature supports the integration of palliative care (PC) and haematology with improved outcomes, particularly in models of early integration and simultaneous care , where supportive care does not exclude active treatments and collaboration between professionals is structured throughout the patient’s care pathway, pursuant to emerging needs. These models have been shown to promote higher-quality symptom management, facilitate complex medical decision-making, contribute to reducing hospitalisations and intensive medical treatments with an adverse harm/benefit ratio, and lower healthcare costs . Despite recent evidence, fewer haematologic patients access PC services compared to patients with solid cancers . The reasons for this phenomenon include cultural aspects, attitudes that propagate during medical training, the unique nature of haematological malignancies, such as difficulty with prognostication, and lack of accessibility to PC services. These barriers cross a variety of cultural contexts, which highlight the broad scope of the problem and emphasise the need for durable and sustainable solutions . In Italy, only one study has previously analysed the cognitive barriers and facilitators of health professionals when referring patients to PC via a qualitative survey . Only two other studies support the early integration of PC and haematology, and both demonstrated the effectiveness of these models in improving quality of life and significantly reducing healthcare costs . This study aimed to investigate the barriers and facilitators perceived by haematologic healthcare professionals in referring patients to PC and to propose a variety of solutions to improve collaboration between palliative and haematologic care. Study procedures The study was formally notified to the Ethics Committee of the Istituto Oncologico Veneto of Padova, and the health departments of each centre gave their approval. This research is a web-based, multicentre, exploratory descriptive survey. 
Eligible participants were specialist and trainee physicians as well as nurses working at an onco-haematological inpatient unit or day hospital of five Italian haematological units and San Marino's hospital, specifically: IRCCS - Istituto Oncologico Veneto (IOV) of Padova and Castelfranco Veneto, Azienda Ospedaliera di Padova, Azienda Ospedaliera of Vicenza, Azienda Sanitaria Universitaria Friuli Centrale (ASU FC) of Udine, IRCCS-Istituto Romagnolo per lo Studio dei Tumori “Dino Amadori” (IRST) of Meldola, and Istituto per la Sicurezza Sociale (ISS) of San Marino. This study arose from our direct experience of resistance by haematologists to referring patients to palliative care. We confirmed, through a review of the literature, that the same issues we had noted had emerged in other settings formatively and culturally distinct from ours. Considering that no such study had been conducted in Italy, we proposed a multicentre survey to promote awareness among health professionals about this topic, in the hope that further investigations will be conducted. This is why we decided to limit participation to academic healthcare professionals: we want this survey to serve as a starting point for future studies that will promote synergy between PC and haematologic care, while also including the public and raising awareness. Participants were enrolled via an email invitation that explained the purpose of the study and included a link to complete the questionnaire on the Google Forms digital platform.

Study measures The initial stage in creating the questionnaire was to conduct a non-systematic review of the literature on PubMed, Cochrane, CINAHL, and Scopus using the following keywords: palliative care, barriers, onco-haematology, haematological malignancies, hospice, end-of-life, interposed by the Boolean operators “and” and “or”. Articles published prior to 2010 were excluded, in order to have recent and up-to-date data and a context more representative of the current reality. The questionnaire was developed using the collected bibliography and tailored for distribution to medical professionals and nurses under the supervision of a qualitative research expert. Validation was not necessary because the project consisted of a survey sent to healthcare experts rather than a measurement scale. Nonetheless, as a model, we used a similar questionnaire previously administered to transplant physicians in the USA . The questionnaire was created using the Google Forms digital platform, as it is a safe and secure tool, widely used for this type of exploratory investigation. Professionals could participate anonymously, and the compilation process took an estimated 20 min overall. The following areas were investigated:
- Personal information and clinical practice characteristics (6 items)
- Knowledge of PC (4 items)
- Education and training in PC (1 item)
- Perceptions of professionals regarding facilitators and barriers (8 items)
- Personal experiences (2 short open-ended questions)
The questionnaire was administered over the course of 20 days, in September and October 2021.

Statistical analysis The study's objectives are descriptive: for the quantitative and qualitative closed-response variables, statistical analyses were carried out using SAS software. After the data were described, the frequency and response rates were correlated with age and professional status by multivariate analysis. Microsoft Excel software was used to categorise the open-ended responses, which were then examined separately through group discussion.
Labels were applied to identify recurrent thematic areas and relative intensity; significant responses were fully reported to support the discussion. The questionnaire responses and the associated raw data are available and can be consulted upon request by the authors.
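To illustrate the kind of descriptive analysis described in this section, the following minimal Python sketch tabulates the frequency of one Likert item and tests its association with profession; the file name and column names are hypothetical placeholders, and the published analyses were actually carried out in SAS and Excel.

import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical export of the questionnaire: one row per respondent, with a
# "profession" column and one column per Likert item.
df = pd.read_csv("questionnaire_responses.csv")

item = "pc_integration_benefits"                 # hypothetical item name
freq = df[item].value_counts().sort_index()
print(freq)                                      # absolute frequencies
print((freq / len(df) * 100).round(1))           # response rates in percent

# Association between the item and profession (physician vs nurse)
table = pd.crosstab(df["profession"], df[item])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")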
Participant characteristics Table lists the characteristics of the participants. Of the 320 health professionals involved, 142 (44.4%) answered the questionnaire. With a fairly homogeneous distribution concerning working reality, nurses (96/142, 67.6%) made up the majority of the sample. Just over half of the interviewees (72/142, 50.7%) reported the presence of an in-hospital PC team, and 77/142 (54.2%) never attended PC-related courses. Knowledge and perceptions of palliative care Most participants (100/142, 70.4%) stated that they knew the role of PC: when asked to supply a list of keywords that could be used to define PC, 119/142 (83.8%) answered. Every response was examined, categorised into macro-areas, and broken down into a total of 321 keywords. The most prevalent categories were "end-of-life and death" (37/321, 11.5%), "accompaniment" (42/321, 13.1%), "support" (28/321, 8.7%), "global care" (29/321, 9%), "symptoms" (44/321, 13.7%), and "quality" (73/321, 22.8%). Concerning simultaneous care, 45.07% (64/142) of participants said that they were unaware of the role it plays (Figure ). We again asked the participants to supply a list of keywords that could be used to define simultaneous care: 110/142 (77.5%) answered. Every response was examined and categorised into macro-areas such as "integration and multidisciplinary team" (32/110, 29.1%), "early management and timing" (27/110, 24.5%), "symptoms, pain, and side effects of therapies" (18/110, 16.4%), and "globality" (8/110, 7.3%). Subsequently, 113/142 (79.6%) of the medical professionals involved strongly agreed with the statement that PC integration in haematology benefits patients and caregivers. At this point, we examined how the terms "simultaneous care" and "palliative care" affected dialogue (Fig. ). First, we asked participants if they thought the terms "palliative care" and "simultaneous care" would prevent PC referrals. Overall, 77/142 (54.3%) answered affirmatively for "palliative care", and 73/142 (51.4%) for "simultaneous care". "Hospice" and "end of life" are synonymous terms for 61/142 (43%) of "palliative care" and 88/142 (61.9%) of "simultaneous care". When asked if the term "palliative care" could make patients and caregivers feel less hopeful, only 39/142 (27.4%) participants said they thought so; however, 76/142 (53.5%) said they thought the same about "simultaneous care".
A few of the interviewees (26/142, 18.3%) agreed that the term "palliative care" could be linked to the management and treatment of exclusive symptoms, while 34/142 (24%) were more concerned with "simultaneous care". Access to palliative care services The timing of the PC referrals was then examined (Figure ). A small majority of participants (75/142, 52.8%) strongly agreed to request PC when the prognosis was less than 3 months, while only 70/142 (49.2%) agreed to do so for haematological patients at the start of treatment. On referrals made 30 days prior to death, 84/142 (59.1%) agreed. On the other hand, there is a greater consensus (97/142, 68.2%) regarding PC referrals when symptoms become unmanageable. Figure shows the perception of maintaining transfusion support in patients no longer eligible for antitumour therapy; this statement was supported by the majority of participants (81/142, 57.1%). Specifically, 18/142 (12.7%) fully agreed and 63/142 (44.4%) agreed. Through a correlation analysis between the participants' profession and these data, it was possible to determine that, on average, physicians are more favourable (24/46, 60.9%) than nurses are (53/96, 55.3%), with nurses more often not taking a position (31/96, 32.3%, compared to 11/46, 23.9% of doctors). In response to the open-ended question about health professionals' opinions concerning referral to the PC team, 122/142 (85.9%) answered, and 111/122 (90.9%) responded positively. The analysis was performed again for these open-ended responses by dividing the 161 recurring keywords into 8 macro-areas: "opportunity for the patient" (48/161, 29.8%), "quality of life and dignity" (39/161, 24.2%), "multidisciplinarity" (21/161, 13%), "total care" (20/161, 12.4%), "grief awareness and processing" (15/161, 9.3%), and "support" (9/161, 5.6%). Merely 4.9% (6/122) expressed dissent, emphasising that the primary issues stemmed from the demoralisation of patients and caregivers, an inadequate PC network in fulfilling patients' requirements, and haematologists who view PC referrals as a personal failure. Consequently, we tried to delve deeper into two significant and related areas. First, the reasons for requesting PC intervention were discussed: 105/142 (74%) fully agreed with the clinical aspects; 112/142 (78.8%) agreed with the communicative-relational reasons; 107/142 (75.4%) agreed with the ethical and deontological aspects; and 116/142 (81.7%) agreed with the management of emotional load. The second part involved who should communicate the prognosis in an advanced stage of illness by referring the patient to the PC team. The first hypothesis was that the haematologist would communicate the prognosis; 39/142 (27.5%) participants strongly agreed, and 53/142 (37.3%) participants agreed. The second involved joint communication by the haematologist and the palliative specialist; 101/142 (74.2%) participants agreed. The final one suggested that the multi-professional team communicate it; 70.4% (100/142) of the participants strongly agreed, and 15.5% (22/142) of participants agreed. Lastly, we used an open-ended question to investigate circumstances in which professionals might have considered it appropriate to refer patients to PC services but chose not to, and if that was the case, we asked why: 110/142 (77.5%) participants confirmed the occurrence of this eventuality.
We classified 150 reasons that prevented access to the PC into recurrent thematic areas by analysing the affirmative answers: "professionals' perceptions" (23/150, 15.33%), "professional training and experience" (34/150, 22.66%), "institute resources" (24/150, 16%), "prognostic timing" (21/150, 14%), "patients' and caregivers' altered perceptions and awareness" (15/150, 10%), "lack of professional collaboration" (13/150, 8.67%), "patients' persistence in therapy" (13/150, 8.67%), and "the doctor-patient relationship" (7/150, 4.67%). Perceived facilitators to palliative care utilisations Figure outlines participant perceptions of elements that might encourage PC team referrals. Having a dedicated case manager is the first potential facilitator that has been explored. Most professionals view this as a factor that favours consistent exchange with the palliative team: 44/142 (31%) agreed, and 67/142 (47.2%) completely agreed. The PC team's presence within the hospital is yet another suggested facilitator, with 89.5% (135/142) of consent; specifically, 42/142 (24%) respondents agreed and 93/142 (65.5%) strongly agreed. Participants rated the availability of an in-hospital hospice as a facilitator in 73.2% (104/142) of cases. Similarly, 73/142 (51.4%) strongly agreed, and 39/142 (27.5%) agreed that regular meetings with a PC team could enhance the integration with haematologists. Regarding the option to request an in-hospital palliative consultation, 86/142 (60.6%) of the respondents strongly agreed that it is a process facilitator. Moreover, most health professionals (110/142, 77.5%) believed that the ability to transfuse patients in hospice or at home could be a facilitating factor. Ultimately, the majority of them viewed training programmes as a motivating factor for patients to be referred to PC (71/142, 50% in strong agreement and 46/142, 32.4% in agreement).
This is the first study conducted in Italy with the goal of thoroughly examining how medical professionals who work in onco-haematology departments feel about PC. The integration of PC and haematology benefits patients and healthcare professionals according to 96.5% (137/142) of participants (113/142, 79.6% strongly agreed, and 24/142, 16.9% agreed). However, one of the most frequently identified barriers is the lack of services, such as an in-hospital PC team, which is crucial for determining which patients can benefit from the service, and an in-hospital hospice. Transfusion support is a highly debated topic in the care of haematological patients, as it is frequently a binary decision for PC referrals . According to the gathered data, transfusion support is a critical component for off-therapy patients. As a result, it is thought that offering transfusions at home and in hospice settings encourages PC team referrals, particularly from physicians.
Despite the fact that healthcare professionals have written extensively about this subject in the literature , clinical practice still lacks a formalised, shared, and comprehensive process to support decision-making and encourage communication between the two teams. This survey highlights the issue of professional collaboration as a barrier to PC referrals. According to the literature , multidisciplinarity seems to help lower cultural barriers related to the role of individual professionals. Nurses were more aware of this issue (82/96, 84.3%), and it is evident that while haematologists understood the value of integrating with the PC team, the volume of referrals did not match the needs that were identified. The lack of uniform organisational models and care pathways for patients with PC needs causes significant disparities between areas, in which equitable, accessible, and continuous care is not guaranteed. Integrated care models encourage increased collaboration and communication between PC and haematology providers during disease treatment . A facilitating factor that is still lacking in regulatory identity but acknowledged by 112/142 (79%) of the involved professionals is the case manager nurse, who establishes goals, maintains consistency across various care settings, and recognises interdisciplinary issues. Although there are many resources available to improve the standard of care, training and education remain among the easiest to apply. According to the study, 100 professionals were familiar with PC but had never taken any related courses. They cited several factors, including lack of experience, inadequate training, and mistrust of haematologists, as contributing to unsuccessful PC referrals. Specific training focusing on the complexity of needs is necessary for PC diffusion in various care settings, but it is still inadequate in university education and in the general culture . Even though PC has gained recognition as a medical specialty and many professionals and students acknowledge its benefits, university curricula still frequently omit PC courses . Note that there is a lack of current literature on PC training and education programmes, as well as on the specific role of the case manager; of the studies we found, only two were published after 2018 . Due to cultural barriers and misconceptions, haematologists and palliativists do not work together as much as they should. Based on the data gathered, however, university education and public awareness campaigns might help close this gap. The timing of PC referral appears to be another significant concern in this study and a prominent theme in the literature . Early referral of haematological patients to PC is hindered by the unpredictable nature of the haematologic disease trajectory, the timing of the treatments, and the possible complications. It is helpful to highlight professional differences, such as the fact that some doctors (15/142, 10.56%) and nurses (12/142, 8%) disagreed with initiating PC at the start of treatment. In haematology, a PC model is still lacking today , and deciding when to activate PC, regardless of the patient's anticipated life expectancy, is crucial. The literature contains evidence that a shared path between PC and aggressive curative treatment (such as conditioning chemotherapy and related transplantation) is possible at the same time, is accepted by patients, and has positive outcomes .
An outpatient observational study of patients with acute myeloid leukaemia revealed that those who received early palliative supportive care had a greater quality of life and lower rates of treatment aggressiveness at the end of life . The survey's results are strikingly similar to those in the literature, which seems to be mostly composed of European and international sources . Even though they are familiar with PC, the majority of the haematologists involved do not consider it essential, so the service is often activated only a few days prior to death; similarly, many are not aware of the role of simultaneous care (64/142, 45.07%), a model that the literature suggests is still often a prerogative of oncology . According to the survey, one significant perceived barrier is the terminology used when proposing the service to patients and family members: healthcare professionals did not acknowledge PC as an identity and are unable to discuss end-of-life issues because of the exclusive relationship established with patients. In addition to the study's identified barriers (which can be summed up as clinical, cultural, educational, organisational, and resource allocation), haematologic patients have access to a variety of therapeutic options that unavoidably postpone suspending active treatment. This has led medical professionals to speak about aggressive medical treatments in certain situations (13/142 answers). The attitudes of the health professionals who participated in the survey generally support suggestions for integrating PC in haematology, indicating a need for improvement. Furthermore, there were no notable distinctions between the replies of doctors and nurses, demonstrating that both groups had similar understandings of the primary problems that surfaced. Proactive suggestions for enhancement, such as case managers, shared procedures and protocol drafting , integrated care models , training , and population involvement, could help PC approach haematology in a way that guarantees the highest quality of life for patients and families. The peculiarities of haematological malignancies (unpredictable illness trajectory, elevated symptom burden, specific care needs), healthcare organisation models (presence of in-hospital PC teams and PC case managers, presence of integrated PC networks between hospital and territory, accessibility to palliative transfusions and chemotherapies), and cultural aspects (training and perceptions of professionals working in haematology, education, and development of shared care plans with caregivers) are, in summary, the main obstacles to haematological patients' referral to PC units. Certain suggestions were proposed to close the gap between these two disciplines based on the comparison of the literature and the data collected. For example:
- To implement an in-hospital PC team. A fully staffed service is frequently impractical due to limited human resources and healthcare policies. Instead, it would be appropriate to reorganise PC services that are currently in place and primarily involved in home and hospice settings, thereby ensuring that the PC team has scheduled intrahospital access (e.g., once or twice a week), which would facilitate consultation requests by haematologists.
- To promote the establishment of specialist outpatient clinics for PC and simultaneous care in haematology wards or day hospitals.
- To hold focus groups involving haematologists and palliative physicians to develop shared checklists that identify key indicators and the best time to refer these patients to PC services.
- To develop shared guidelines and context-specific logistic procedures for transfusion support to guide physicians in decision-making. Expert talks or focus groups involving all professionals (transfusion medicine doctors, haematologists, and palliative physicians) could be used to build the former. The development of appropriate transfusion criteria could be accomplished using a checklist that considers blood values, the risk/benefit ratio (circulatory overload brought on by transfusion versus the reduction of anaemia symptoms), and the accessibility of substitute therapies such as iron infusion . To overcome organisational and resource barriers that prevent blood from being supplied to hospices or through home transfusion arrangements, logistical procedures should be developed by all parties involved in the pathway (hospital, territory, transfusion centre) .
- To encourage collaborative staff training through hospital-specific courses, such as those offered as part of the mandatory yearly training plan or new hire orientation.
- To provide haematologists with a training period in PC wards during their university specialisation, as is already done in the training of palliative specialists, to improve their knowledge in both fields.
- To promote the presence of case managers. To enhance the integration between the two disciplines, it would be beneficial to have at least one member of the haematology team serve as an activity and services coordinator, and as a liaison between the patient, the healthcare system, and community resources. One of the primary objectives of the case manager is to lessen the patient's psychological distress and manage symptoms resulting from illness or treatment, thereby improving the quality of care for the patient and family. For this reason, this specialist may be the most important member of the haematology team in determining the appropriate and pertinent palliative care referral. Following the identification of the need for PC, the case manager might encourage the haematologist to request consultation with a PC specialist or may suggest clinical cases for discussion via a multidisciplinary briefing between the two services. Furthermore, patients with haematologic malignancies require extensive clinical and logistic information to make treatment and clinical decisions, and case managers are experts in building consensus and empowerment: they could present palliative care as one of the services and resources patients and families could access at any time throughout their care path.
- To encourage the scheduling of multidisciplinary meetings, address urgent cases, and assess potential simultaneous care pathways.
This survey has several limitations. The ability to complete the questionnaire exclusively online and performance bias (because participants were acquainted with the researchers) may have had an impact on the response rate. There is also respondent selection bias: the characteristics of nonrespondents were not collected, which could limit the ability to generalise the data findings; additionally, the study only examined the opinions of medical professionals who work with haematology patients, and the centres involved differed.
Future research should assess the viability and dependability of the suggested implementation pathways, examine in greater detail the variations among the specific centres, and explore the perspectives of palliative physicians. This would also allow the survey data to be cross-referenced with the actual numbers of patients who are referred to PC. |
Digital twins as global learning health and disease models for preventive and personalized medicine | 36d07c98-037f-4ba8-8e58-0cf7d89432fb | 11806862 | Preventive Medicine[mh] | Ineffective medication is one of the most important healthcare problems. Many patients with complex diseases do not respond to treatment or experience serious side effects. This problem causes enormous suffering and costs for health care, drug development, and lost productivity. An important reason for ineffective medication is the daunting complexity of diseases. Multi-omics analyses down to the single cell level show that each disease can involve altered interactions among thousands of genes across billions of cells in multiple organs . Most diseases, including inflammatory, cardiovascular, malignant, and metabolic, can develop for many years, or even decades, before symptoms manifest themselves and a diagnosis is given. Ineffective treatment increases the risk of comorbidities, and a vicious circle of increasing treatment inefficiency ensues. Disease progression can differ between different patients with the same diagnosis or within a patient at different time points. Indeed, health and disease can be seen as variable entities on continuous scales. Such variations depend on genetic or environmental factors, such as pollution, lifestyle, and inequitable health care. The 2030 agenda for sustainable development identified effective and equitable health as priorities . To address these priorities would require identification of factors that predispose to, or protect against, a complex disease in the life of a patient. Digital twins (DTs) can contribute to these goals. The DT concept is derived from engineering with the aims of modeling and developing complex systems more effectively and inexpensively in silico than in real life. As with many emerging disciplines, there is no accepted definition of a medical DT . However, many definitions have been proposed, ranging from a computational model of a disease process to a comprehensive model of a whole virtual representation of a patient that is continuously updated with relevant information . Reasons for lack of a generally accepted definition include the wide variety of potential applications of medical DTs and emerging technologies. Thus, it is possible that definitions will change, and perhaps be adapted to different contexts. This flexibility was also proposed in a recent publication about medical DTs . Here, we will use a broad definition of medical DTs: virtual representations of healthy or sick processes across lifecycles that can be understood, learned, and reasoned with real-time data or simulation models to predict, prevent, or treat diseases . Early examples of DTs have already been tested in the clinic, such as in the setting of an artificial lung or artificial pancreas . Recently, a resource of sex-specific, organ-resolved whole-body models (WBMs) of infant metabolism was described . This can be used to develop personalized infant-WBMs to predict infant growth in health and disease. Similar models of the whole immune system are projected . Ideally, analyses and computational treatment of DTs will improve health care by paving the way for predictive, preventive, and personalized treatments . Two recent literature reviews provide comprehensive compilations of potential DT applications in health care , as summarized in Table . DTs have been applied in cancer, cardiology, neurology, orthopedics, and wellness .
Other applications include the use of DTs to improve drug discovery, clinical trial design, and workflows in hospitals . As an example, Siemens Healthineers and the Medical University of South Carolina collaborated to optimize hospital processes based on DT applications that simulated different workflows and medical equipment . Another example was a DT of a hospital that provided predictive models of health care needs during the COVID-19 pandemic. Those needs included ventilators, critical care beds, and extracorporeal life support. The generated DTs were used to optimize the use of such resources and to provide clinical decision support for treatment of individual patients in all hospitals in the state of Oregon . The medical potential of DTs has been recognized by scientific organizations in the US, Europe, and Asia, and has led to international collaborative efforts to implement this computational strategy in health care and clinical trials. Such efforts and potential clinical applications have been extensively reviewed . However, clinical implementation of DTs involves multiple challenges that have not been systematically addressed together in a single review, including (1) dynamic characterization of health and disease-associated molecular changes on population-, organome-, cellulome-, and genome-wide scales, as well as environmental factors; (2) computational methods that integrate and organize all changes into DTs; (3) prioritization of mechanisms, from which (4) diagnostic biomarkers and preventive measures or therapeutic targets can be inferred; (5) solutions to connect 1–4 so that DTs can learn from each other; (6) user-friendly interfaces adapted to individuals and care givers; (7) solutions to disseminate DTs on a global scale for equitable and effective health; and (8) solutions to address social, psychological, organizational, ethical, regulatory, and financial challenges and opportunities. As highlighted by manifestos about DTs from the European Commission and US National Academy of Sciences, Engineering and Medicine, there is a lack of concrete clinical implementations that address these challenges . Moreover, the emerging market for medical DTs is projected to reach US$183 billion by 2031 . This has resulted in multiple industrial efforts to develop and implement DTs for health care . Here, we will discuss these challenges and potential solutions and give concrete examples of such solutions. Predictive, preventive, and personalized medicine will require analyses of potential disease causes on multiple scales, ranging from populations to individuals, to their tissues, cells, and molecular species. Since multi-morbidity is common, population-wide analyses are important for characterizing disease constellations. This goal is feasible because of the availability of longitudinal electronic medical records of populations and large biobanks. As an example, our analyses of temporal disease trajectories of over 200 million Americans revealed ten constellations of comorbid diseases (Fig. ). One motivation for studying whole populations is that environmental and genetic factors associated with health and disease may be identified. As an example, a study of health records of over 480,000 US individuals, along with geographically specific environmental quality measures, suggested that different combinations of genetic and environmental factors play significant roles in schizophrenia risk.
The authors concluded that such knowledge could make it possible to implement preventative public health measures at the level of the general population, as well as personalized clinical strategies through genotype-guided primary, secondary, and tertiary prevention to protect defined individuals from exposure to specific environmental risks . Moreover, diseases often occur sequentially, so that disease trajectories can be characterized. Such information might be used for prediction and prevention of diseases. A well-known example from health care today is that early diagnosis and treatment of hypertension prevent cardiovascular diseases. However, many disease trajectories and their genetic/environmental associations may remain uncharacterized because of their complexity and heterogeneity, as well as lack of systematic analyses on population-wide scales. On the scale of individuals, detailed characterization of health and disease mechanisms can be achieved using different types of genome-wide analyses ("multi-omics") down to the level of single cells (Fig. ) . The latter is important because analyses of the transcriptomes of thousands of cells give sufficient statistical power to characterize disease-associated changes in an individual patient by comparing sick and healthy tissues. As shown in Fig. , such changes can vary greatly between two patients with the same diagnosis who will therefore require different treatments. Treatment of disease-associated changes is further complicated by the involvement of multiple organs with variable mechanisms in the same patients . A recent single-cell RNA sequencing (scRNA-seq) study of a mouse model of arthritis showed involvement of multiple interconnected organs, although only joints showed signs of disease (Fig. ). This heterogeneity has important clinical implications: a drug target in one organ may variably interact with the same or other genes in the same and other organs. Such variations are not possible to measure in individual patients with current diagnostic methods. This may be one explanation for why medication is ineffective in many patients. The complexity and heterogeneity of diseases call for systems-level solutions to organize disease-associated changes into DTs on scales ranging from populations to individuals. We propose that analyses of data on population-wide scales, such as those shown in Fig. , can potentially be developed to construct DTs of health and disease processes in whole populations (henceforth referred to as pop-DTs). Since the data and methods to construct pop-DTs have yet to be identified and developed, an exact definition of a pop-DT remains to be established. However, a prototypic definition could be virtual representations of healthy and sick processes in populations across life spans, as well as their environmental and genetic associations. The pop-DTs should be continuously updated with data from any relevant source, such as electronic medical records, quality registries, and environmental and genetic databases. The pop-DTs should facilitate analyses that identify factors influencing health and disease, in order to promote health and to predict and prevent diseases. Construction and analyses of pop-DTs will involve huge challenges, including finding relevant data and developing methods to analyze such data. Pop-DTs should ideally describe combinations of environmental and genetic causes of health or disease.
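To make the notion of temporal disease trajectories, one building block of a pop-DT, more concrete, the minimal Python sketch below counts how often one diagnosis precedes another in longitudinal records. The file and column names are hypothetical placeholders, and real trajectory analyses additionally test directionality and significance against matched controls.

import pandas as pd
from itertools import combinations
from collections import Counter

# Hypothetical long-format table: one row per diagnosis event,
# with columns patient_id, date, and icd_code.
dx = pd.read_csv("diagnoses.csv", parse_dates=["date"])

pair_counts = Counter()
for _, events in dx.sort_values("date").groupby("patient_id"):
    codes = events["icd_code"].drop_duplicates().tolist()   # first occurrences, in time order
    for earlier, later in combinations(codes, 2):            # "earlier" was diagnosed before "later"
        pair_counts[(earlier, later)] += 1

print(pair_counts.most_common(10))   # most frequent ordered disease pairs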
The underlying data are increasingly available in longitudinal electronic medical records, quality registries, and genome-wide databases. Pop-DTs should be continuously updated based on information from the literature and the evolution of different databases. The example in Fig. may represent an early attempt to address such challenges. The result can be seen as a prototypic pop-DT. In this version of the pop-DT, based on natural language processing-inspired word embedding, we computed a 20-dimensional continuous "disease space," where diseases, such as lung cancer or depression, are represented as 20-dimensional vectors. In this embedding, similar-etiology diseases tend to occur in close neighborhoods of each other. Indi-DTs translate the same principles to individual patients, but at a greater resolution. Disease-associated changes can be described on multi-organ, -cellulome, and genome-wide scales, as shown in Figs. and . The clinical importance of cellular and molecular resolution lies in the fact that it is needed to find biomarkers and drug targets for predictive and preventive treatments. The figures also illustrate how different types of variables can be organized into networks on different scales. For example, in Fig. disease-associated cell types from individual patients are connected into networks using predicted molecular interactions between those cell types. Those interactions were predicted by bioinformatically inferring the upstream regulator (UR) genes of differentially expressed genes (DEGs) in any cell type. If a UR was found in one cell type and its DEGs in another, the two cell types were connected by an edge. Importantly, networks may provide a systems-level solution that organizes multiple types of variables in a complex system and shows how they interact within and between different levels of that system, as well as with variables in other complex systems. For example, symptoms and signs of human diseases can be connected to a network. In such a network, co-occurring symptoms and signs of the same disease are interconnected into modules (like pain in the chest and left arm in myocardial infarction). Such modules can, in turn, be connected to underlying cellular and molecular networks. Similarly, networks of environmental factors can be constructed and connected into multi-layer networks that describe diseases on scales ranging from populations to individuals, as well as how they change over time (Fig. A and ). Such multi-layer networks may be used to analyze the multiple relationships each node within the network has with every other node. For example, environmental effects can be depicted by recognizing the post-translational modifications of proteins in the protein–protein interaction network and their functional consequences. Ideally, tracing such relationships could lead to identification of subnetworks or modules in which the major determinants of every specific disease exist. If so, this could lead to the identification of potential drug targets that can be used to guide therapeutic strategies and drug development, including drug repurposing . Moreover, multi-layer networks can provide a framework from which highly predictive combinations of variables for different purposes, such as personalized treatment, can be inferred with deep learning/artificial intelligence (AI) techniques (Fig. C and ).
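The edge-construction rule described above can be illustrated with a toy Python sketch: two cell types are linked when an upstream regulator expressed by one has predicted targets among the DEGs of another, and a simple centrality measure can then rank cell types. The gene sets and the UR-target map below are invented toy data, not results from the cited studies.

import networkx as nx

# Toy per-cell-type DEGs and inferred upstream regulators (URs)
degs = {"T_cell": {"STAT1", "CXCL9"},
        "macrophage": {"TNF", "IL1B"},
        "fibroblast": {"CXCL9", "MMP3"}}
urs = {"T_cell": {"IFNG"}, "macrophage": {"TNF"}}
ur_targets = {"IFNG": {"STAT1", "CXCL9"},            # toy UR -> target gene map
              "TNF": {"IL1B", "MMP3", "CXCL9"}}

G = nx.DiGraph()
for sender, sender_urs in urs.items():
    for receiver, receiver_degs in degs.items():
        if sender == receiver:
            continue
        for ur in sender_urs:
            if ur_targets.get(ur, set()) & receiver_degs:   # UR targets among receiver DEGs
                G.add_edge(sender, receiver, ur=ur)

print(list(G.edges(data=True)))            # predicted cell-cell interactions
print(nx.out_degree_centrality(G))         # rank cell types by outgoing influence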
These principles will be applied in a recent initiative, The Virtual Child, which aims to construct computational models of individual children's cancer development to predict, prevent, or treat such developments, based on multi-layer networks . This initiative is based on a multidisciplinary team (professional social network) consisting of patient advocates, industry partners, and basic and clinical researchers from three continents. Thus, the application of network tools to construct multi-layer networks may provide a solution to the challenge of constructing and analyzing pop- and indi-DTs. Many approaches for constructing medical DTs have been proposed and extensively reviewed elsewhere . These strategies encompass advanced machine learning (ML) algorithms and computational modeling techniques, such as multi-scale models that integrate molecular, multicellular, and organismal scales, all of which are fundamental to this process. These modeling approaches may involve systems of ordinary differential equations, agent-based models, and other dynamical systems models. The latter are crucial for modeling molecular interactions within cancer cells that ultimately influence cellular phenotypes. Moreover, ML algorithms significantly contribute by identifying complex patterns and associations within large datasets, improving the efficiency and accuracy of predictions related to tumor behavior and treatment outcomes. In the next section, we will discuss how networks can be systematically analyzed to prioritize mechanisms for predictive, preventive, and personalized medicine. Prioritization of disease-relevant environmental, phenotypic, and molecular changes on dynamic population-, organome-, cellulome-, and genome-wide scales is an unresolved challenge. However, recent studies point to potential solutions: On the scale of pop-DTs, analyses of longitudinal data from electronic medical records or biobanks can identify the evolution of disease constellations such that the initiating mechanisms of (preclinical) diseases can be identified (Fig. ). Combined analyses of molecular data can be used to infer early mechanisms, as well as biomarkers and drug targets for prediction and prevention. On the scale of indi-DTs, the potential of single-cell-based methods for personalized medicine was recognized at an early stage . Recently, several methods have been described to infer relations to clinical traits such as survival and treatment responses. These methods have been applied to multiple diseases, including cancer and cardiological and neurological diseases . As an example, the scGWAS (scRNA-seq assisted genome-wide association studies analysis) method was developed to investigate transcriptional changes associated with genetic variants in specific cell types and their relationship to traits in multiple complex diseases . Another application is to infer drug sensitivity based on scRNA-seq data. In cancer or cardiac cells, drugs or drug combinations can be inferred by integrating analyses of single-cell expression profiles with pharmacogenomic databases . A recent study proposes a novel computational method to identify phenotype-associated cellular states that could be used to infer biomarkers to predict response to therapy and survival in order to improve prognosis and treatment . Frameworks like scDrugPrio construct network models of diseases based on scRNA-seq data to prioritize drug candidates. This approach considers cell type-specific gene expression changes.
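To illustrate the ordinary-differential-equation component of the multi-scale modeling approaches mentioned above, the sketch below simulates a deliberately simple two-variable system (tumor burden and an effector population) with SciPy. The equations and parameters are illustrative toy choices, not a validated disease model from the cited work.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, r=0.3, K=1e9, k=5e-7, a=0.05, d=0.1):
    T, E = y
    dT = r * T * (1 - T / K) - k * T * E      # logistic tumor growth minus immune killing
    dE = a * T / (1e6 + T) - d * E            # effector recruitment and decay
    return [dT, dE]

sol = solve_ivp(rhs, t_span=(0, 200), y0=[1e5, 1e3],
                t_eval=np.linspace(0, 200, 50))
print(sol.y[0, -1], sol.y[1, -1])             # tumor and effector levels at day 200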
Dynamic multicellular disease models (MCDMs) can be analyzed to find early URs, which may be both diagnostic and therapeutic targets that predict and prevent disease in cancer, cardiological, and neurological diseases. Network analyses, such as centrality measures, can be used to prioritize the most central cell types in MCDMs and their modules. Those modules may be computationally matched with thousands of drugs to find the optimal ones for individual patients (Fig. e–f). This approach has been validated by extensive in vitro and in vivo studies and is ready for clinical trials. Machine and transfer learning can be used to project data about genome-wide drug responses from public databases to individual patients . Pop- and indi-DTs are envisioned to learn and adapt continuously, providing predictive, preventive, and personalized treatment based on diverse data, as described above. The potential of linking medical DTs to emerging DTs in related fields, such as climatology, environmental pollution, and socioeconomics, was recently discussed at a series of seminars organized by the US National Academies of Sciences, Engineering, and Medicine . Algorithmic advances in AI that can contribute to improving and integrating DTs include self-supervised learning , geometric deep learning , and generative pre-training of foundation medical models . Collectively, these AI approaches are transforming adjacent areas, including healthcare decision support systems, and can be directly adapted to enhance the predictive power and scalability of digital twins due to their unique capabilities in handling complex, multi-modal, and data-limited environments, which are characteristic of biomedical systems across scales. Self-supervised learning is a form of ML in which the system learns to predict part of its input from other parts of its input using a large amount of unlabeled data. In healthcare DTs, obtaining large-scale labeled datasets is often challenging due to privacy concerns, cost, and the complexity of clinical annotations . Self-supervised learning leverages vast amounts of unlabeled medical data (e.g., clinical notes, imaging data) to pre-train models that can be fine-tuned with minimal supervision. This is crucial for DTs, which must integrate various forms of patient data and operate in data-constrained environments. For example, a DT could learn patterns from medical images, electronic health records, or genetic data to predict missing patient records or infer future clinical events, enhancing the DT's ability to simulate potential disease progressions even when labeled data is limited. Geometric deep learning is a recent paradigm that generalizes deep neural network models to non-Euclidean domains such as graphs and manifolds . Biological systems naturally reside on graph-structured data, such as molecular structures, protein–protein interaction networks, molecular pathways, and patient similarity networks. Geometric deep learning excels at learning from data structured as graphs, meshes, or manifolds, making it ideal for capturing the relationships and dynamics within these systems . For example, in a DT of the human heart, geometric deep learning approaches could model geometries of different anatomical structures (e.g., blood vessels, muscle tissues) to simulate cardiovascular functions under different conditions. These approaches are particularly powerful in modeling spatial relationships in imaging data, which can be useful for simulating personalized disease models .
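The self-supervised idea described above can be illustrated with a minimal PyTorch sketch that masks random entries of unlabeled patient vectors and trains a small network to reconstruct them; the dimensions and random data are placeholders standing in for real records.

import torch
import torch.nn as nn

x = torch.randn(256, 32)                         # 256 unlabeled records, 32 features (toy data)
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    mask = (torch.rand_like(x) < 0.25).float()   # hide about 25% of the entries
    pred = model(x * (1 - mask))                 # predict from the visible part only
    loss = ((pred - x) * mask).pow(2).mean()     # score only the hidden entries
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(loss))   # the pretrained weights can then be fine-tuned on a small labeled set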
DTs need to integrate multiple data types, such as clinical data, genomics, and imaging. Generative pre-training on large-scale medical datasets is a technique to build foundation models that learn medical knowledge across these modalities and can be fine-tuned for specific DT applications. Instead of training many task-specific models, we can adapt a single, generative, pre-trained model to many tasks via few-shot prompting or fine-tuning . For example, in virtual cell simulators , this approach can generate and test hypotheses in virtual environments, enabling scientists to explore scenarios and conditions that are difficult to replicate in a physical laboratory . In clinical DTs, this approach could simulate patient-specific outcomes by generating treatment responses or disease progressions based on the individual’s data. Integrating DTs and AI models into clinical settings presents an important challenge in ensuring these technologies are interpretable and transparent to individuals, care givers, and medical researchers. This is essential for participatory medicine, where joint decision-making between patients and health professionals is based on a clear and informed understanding of health and disease management . Machine learning models often function as black boxes, making it difficult for end-users, such as clinicians and patients, to understand how predictions are made . Ensuring these models are explainable without sacrificing accuracy is crucial for trust and usability in DTs . Explainability techniques allow us to create DT models that are more transparent. These techniques include tools that visualize how data points are connected and influence one another within the model and algorithms that break down complex predictions into simpler, more comprehensible components. One of the most effective explainability tools in DTs is attribution maps . These maps visually represent which parts of the model’s structure—such as nodes or edges in a graph, pixels in an image, or time points in sequential datasets—contribute most to a prediction. For example, in a medical DT simulating a patient’s disease progression, attribution maps can highlight which clinical symptoms, genetic markers, or other factors are most influential in diagnosing a condition or predicting a treatment outcome . This visualization helps clinicians validate the model’s reasoning and makes it easier for patients to understand why certain medical decisions are recommended. Another explainability technique involves local explainers—tools that focus on explaining individual predictions rather than the overall model behavior . In healthcare DTs, where personalized care is essential, local explainers can offer detailed insights into why a model recommended a specific treatment or diagnosis for a particular patient. For instance, in a DT built from scRNA-seq data, local explainers can help determine why a certain cell type or gene expression pattern was critical for prediction . This fine-grained understanding is especially useful in precision medicine, where individual-level explanations are often more actionable than global trends. In scRNA-seq DTs, explainable AI can be employed to trace the molecular basis of a prediction. For example, a visible neural network —designed to be inherently interpretable—can illustrate which gene expressions or pathways influenced the model’s classification of cell states in a patient’s immune response . 
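As a minimal illustration of the attribution maps and local explainers discussed above, the PyTorch sketch below computes a simple gradient-times-input saliency for a toy risk model; the model and input are placeholders, and dedicated libraries (for example Captum or SHAP) provide more refined attribution methods.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1))  # toy risk model
model.eval()

x = torch.randn(1, 32, requires_grad=True)    # one patient's feature vector (toy)
risk = model(x).sum()
risk.backward()                               # gradients of the prediction w.r.t. the inputs

attribution = (x.grad * x).squeeze()          # gradient x input as a simple saliency score
top = attribution.abs().topk(5).indices
print("most influential feature indices:", top.tolist())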
This type of transparency is critical in complex systems where biological pathways are intricate, and predictions must be rooted in identifiable molecular changes. In healthcare DTs, a visible neural network could be deployed to predict hospital readmissions. Each layer of the model is structured to offer insights into why certain factors (e.g., age, comorbidities, medication adherence) influence the likelihood of readmission. By making these decisions transparent, hospitals can better allocate resources and tailor interventions for at-risk patients . Designing explainable interfaces for DTs tailored to patients' preferences and educational backgrounds can further enhance their usefulness. This might involve creating visualizations that patients can understand while providing clinicians with detailed insights . For example, a DT interface that simulates the impact of lifestyle changes on disease progression could incorporate interactive elements to engage patients more effectively, adapting the presentation based on their health literacy . Leveraging explainability techniques like attribution maps, local explainers, and visible neural networks can enhance the usability of DTs and foster interaction between DTs and human users . As evidenced by the Virtual Child Project, which spans three continents, many of the computational solutions underlying DTs are independent of geographical location. This supports the idea that DTs may contribute to improved and equitable health on a global scale based on collaborative efforts between developing and developed countries. There are several successful examples of such collaborations aiming at global health digitalization, including an automated pipeline for virtual drug discovery and clinical applications such as digital or AI-supported diagnostic protocols in low-resource settings . Clinical implementation of DTs will involve a wide range of challenges. As recently discussed, many of these challenges are generic for implementation of computational science in different fields . One important example is gender differences in how digital technologies and health care are perceived, used, and led in different countries . Such differences can be disadvantageous for women, especially women of racial or ethnic minority backgrounds. Another question can be data ownership: can a patient be asked to share increasingly detailed information from her DT as a resource for clinicians treating patients with similar characteristics, or for use in clinical or industrial research, such as drug discovery? Addressing this question requires integrated solutions to tackle challenges in ethics, data security, and regulatory issues . However, despite national differences in evaluation and approval processes, computational modeling tools for clinical purposes have entered the market. The FDA has implemented pre-qualification programs to speed up the regulatory processes of digital tools. Additionally, protecting the privacy and rights of an individual's DT is crucial, especially as it incorporates sensitive, multiscale data. This may require, for example, federated data analysis with evolving computational approaches that protect privacy even in population-based studies. A white paper from the US National Academy of Science recently recommended that the potential of digital twins to "accelerate scientific discovery and revolutionize health care" would merit an integrated agenda to harmonize research across sectors and focus efforts on realistic applications.
These efforts should be "crosscutting" to help "advance the mathematical, statistical, and computational foundations underlying digital twin technologies." However, the white paper also stated that there is a "lack of adopted standards in data generation" that "hinders the interoperability of data required for digital twins." Finally, the report noted that "fundamental challenges include aggregating uncertainty across different data modalities and scales as well as addressing missing data". While implementation of DTs for predictive, preventive, and personalized medicine will involve huge and diverse challenges, these must be balanced against the suffering and costs resulting from the many patients for whom today's diagnostics and therapeutics are ineffective. These challenges arise from the intricate nature of diseases, which involve complex interactions among thousands of genes across various cell types and organs. Disease progression can vary significantly between patients and over time, influenced by a combination of genetic and environmental factors. DTs are increasingly recognized as a potential solution to address these challenges in healthcare. Early clinical applications of DTs have already emerged for endocrine, cardiological, and malignant diseases, as well as for hospital workflow optimization. These applications demonstrate the versatility and potential impact of DT technology in healthcare. However, widespread implementation of DTs in healthcare faces several challenges:
Biological complexity: characterizing dynamic molecular changes across multiple biological scales.
Data integration: developing computational methods to integrate diverse data types into digital twins.
Prioritization: identifying and prioritizing disease mechanisms and therapeutic targets.
Interoperability: creating digital twin systems that can learn from and interact with each other.
User interface: designing intuitive interfaces for patients and clinicians.
Global scaling: expanding digital twin technology globally to ensure equitable healthcare access.
Ethical and regulatory considerations: addressing ethical, regulatory, and financial aspects of digital twin implementation.
Addressing these challenges as proposed by the 2030 agenda will require global collaborations between developed and developing countries, as well as patient organizations, health care professionals, academic and industrial researchers, politicians, and regulatory bodies. This could pave the way for a more predictive, preventive, and personalized approach to medicine. The successful implementation of digital twins has the potential to transform healthcare delivery and significantly improve patient outcomes. As research progresses and technology advances, digital twins may become an integral part of healthcare systems worldwide, offering tailored solutions for individual patients and enhancing overall healthcare efficiency.
KIF23 is a potential biomarker of diffuse large B cell lymphoma
Introduction Diffuse Large B-cell Lymphoma (DLBCL) is the most common type of hematological cancer, accounting for 30%–40% of non-Hodgkin lymphoma. The standard treatment is chemoimmunotherapy with rituximab plus cyclophosphamide, doxorubicin, vincristine, and prednisone (R-CHOP), leading to the cure or remission of 60% of patients. However, 40% of patients succumb to DLBCL. The application of next-generation sequencing has revealed a great degree of molecular and clinical heterogeneity in DLBCL. This heterogeneity poses a series of challenges in the understanding and treatment of DLBCL. Further deciphering the genes and signaling pathways involved in the initiation and development of DLBCL may provide a chance for efficient therapy. Kinesin superfamily proteins possess a highly conserved motor domain, which hydrolyzes ATP to generate the energy that drives their conformational change and movement. These proteins participate in multiple biological functions, including mitosis, organelle transport, and signaling events. Dysregulation of kinesin superfamily proteins is involved in the initiation, development, and progression of human cancers. Kinesin family member 23 (KIF23), a member of the kinesin-6 family located at the interzone of mitotic spindles, plays a critical role in cytokinesis. The tumor suppressor gene p53 can repress KIF23 transcription by downregulating KIF23 promoter activity, whereas TCF-4 can directly bind to the promoter of KIF23 at -814/-805 bp (GGGTCAAAGA) to activate its transcription. KIF23 knockdown significantly decreased the proliferation of glioma cells and gastric cancer cells in vitro and in vivo. In synovial sarcoma, KIF23 was involved in metastasis, leading to reduced survival. A recent study found that knockdown of the lncRNA PVT1 reduced KIF23 expression by enhancing miR-15a-5p, thereby attenuating prostate cancer progression. Nonetheless, the roles of KIF23 in DLBCL remain unclear. In this study, we selected four microarray datasets to screen differentially expressed genes (DEGs) between DLBCL and the corresponding normal tissues. Kyoto Encyclopedia of Genes and Genomes (KEGG) and Gene Ontology (GO) analyses were performed to obtain insights into these DEGs. A protein–protein interaction (PPI) network was constructed to identify hub genes. Next, survival analyses identified that KIF23 was significantly associated with poor prognosis in DLBCL based on four datasets. Finally, LinkedOmics, KEGG, GO, Gene Set Enrichment Analysis (GSEA), and the methylation array of the TCGA dataset were all used to explore the possible molecular mechanisms of KIF23. Our results improve the understanding of the roles and mechanisms of KIF23 in DLBCL.
Materials and methods 2.1 Data source To obtain key genes related to DLBCL, we used "DLBCL" and "RNA" as keywords to search gene expression profiles in the GEO database. Only datasets containing both normal tissues and DLBCL samples were considered. Finally, four GEO datasets (GSE25638, GSE44337, GSE56315, GSE32018) were selected for further study. GSE25638, GSE44337, and GSE56315 were based on GPL570 ([HG-U133_Plus_2] Affymetrix Human Genome U133 Plus 2.0 Array). GSE32018 was based on GPL6480 (Agilent-014850 Whole Human Genome Microarray 4 × 44 K G4112F (Probe Name version)). The data for GSE25638, GSE44337, GSE56315, and GSE32018 consisted of 26 DLBCL patients vs 13 controls, 9 DLBCL patients vs 3 controls, 55 DLBCL patients vs 33 controls, and 22 DLBCL patients vs 7 controls, respectively. Two DNA methylation profiles (TCGA, n = 48; GSE92679, n = 97) were selected for methylation analysis. Three GEO datasets (GSE10846, GSE32918, GSE23501) were selected for survival analysis. GSE10846, GSE32918, and GSE23501 consist of 414 DLBCL patients (181 patients received the CHOP regimen and 233 patients received the R-CHOP regimen), 244 DLBCL patients, and 69 DLBCL patients, respectively. For multivariate Cox analysis, several clinical factors (age, gender, regimen, ECOG, stage, LDH ratio, and extranodal sites) were included. Twenty-seven DLBCL paraffin-embedded tissues and 18 lymphoid samples were obtained from the Second Hospital of Shaoxing and the First Affiliated Hospital of Zhejiang University, respectively, with the necessary informed consent of patients (samples were collected from 2015–2016). Another 77 DLBCL paraffin-embedded tissues with clinical information were collected from the First Affiliated Hospital of Zhejiang University (samples were collected from 2009–2016). 2.2 DEG identification DEG identification was performed as follows. First, we mapped the probe IDs to gene symbols using R software (version 4.0.0). Then, the limma package (version 3.44.3) was adopted to identify DEGs between DLBCL and the corresponding normal controls. Genes with false discovery rate (FDR)-adjusted P < .05 and |log2 fold change| > 1 were considered DEGs. Finally, the overlapping DEGs shared by all four datasets (GSE25638, GSE44337, GSE56315, GSE32018) were obtained and visualized using VennDiagram (version 1.6.20). 2.3 KEGG pathway and GO annotation analysis To reveal the potential functions of the overlapping DEGs, we performed enrichment analyses as follows. First, gene symbols were mapped to ENTREZID using the R package org.Hs.eg.db (version 3.11.4). Then, we conducted KEGG pathway and GO annotation analyses using the R package clusterProfiler (version 3.16.1) with the "enrichKEGG" and "enrichGO" functions, respectively. GO terms consist of biological process, cellular component, and molecular function. An FDR-adjusted P < .05 was regarded as statistically significant. Finally, the function "Dotplot" was used for visualization. 2.4 PPI network and hub genes The Search Tool for the Retrieval of Interacting Genes (STRING) provides a platform for constructing protein association networks. The overlapping DEGs were submitted to STRING, and PPI pairs with a combined score of no less than 0.4 were retained. We used the cytoHubba plugin of Cytoscape (3.8.0) to calculate and visualize hub genes in the PPI network. The local-based method of cytoHubba, comprising four algorithms (Degree (Deg), Maximal Clique Centrality, Density of Maximum Neighborhood Component, and Maximum Neighborhood Component), was used to screen out the top 30 hub genes.
VennDiagram (1.6.20) was then applied to identify the overlapping hub genes calculated by these four algorithms. We used Gene Expression Profiling Interactive Analysis (GEPIA, http://gepia.cancer-pku.cn/ ) to validate the differential expression levels of these overlapping hub genes. 2.5 Survival analysis for hub genes We performed Kaplan-Meier analyses to explore associations between overall survival (OS) and hub genes. GSE10846 and GSE32918 were used as test cohorts, and GSE23501 was used as the validation cohort. For further study, as GSE10846 provided detailed and refined information about each patient, Kaplan-Meier, univariate, and multivariate analyses based on GSE10846 were used to explore the relationship between KIF23 and the prognosis of patients under different clinical conditions. Patients with missing values or lost to follow-up were excluded. Two R packages (survival, version 3.1-12; survminer, version 0.4.7) were used for the Kaplan-Meier, univariate, and multivariate analyses. 2.6 Immunohistochemistry analysis IHC staining was performed with the KIF23 antibody (Abcam, ab235955, 1:550) following the manufacturer's protocol. The staining value of KIF23 was calculated as previously described: the staining index (values 0–12) was obtained as the product of the intensity of KIF23-positive staining (3, strong staining; 2, moderate staining; 1, weak staining; 0, no staining) and the proportion of immuno-positive cells of interest (4, >75%; 3, 51%–75%; 2, 26%–50%; 1, <25%). For example, if a patient specimen showed strong KIF23-positive staining in more than 75% of cells, the staining score was 12 (3 multiplied by 4 equals 12). For survival analysis, all patients were divided into two groups according to the median value of the KIF23 staining index: a KIF23-high expression group (≥ median) and a KIF23-low expression group (< median). 2.7 LinkedOmics and GSEA LinkedOmics ( http://www.linkedomics.org/login.php ) provides a platform for analyzing multi-dimensional TCGA datasets. We applied Pearson's correlation coefficient to detect KIF23 co-expression genes. KEGG pathway and GO (biological process) enrichment analyses were performed to obtain deeper insights into the potential functions of KIF23 in DLBCL. In addition, all 48 DLBCL patients from TCGA were separated into two groups according to the median KIF23 value, and GSEA (4.0.3) was then performed. In this study, we chose h.all.v7.2.symbols.gmt [Hallmarks] as the gene set database. The number of permutations was 1000, and the permutation type was phenotype. 2.8 The combined analysis of RNA-seq and corresponding DNA methylation microarray of DLBCL from TCGA To explore the possible reason for the higher expression of KIF23 in DLBCL, we combined RNA-seq and the corresponding DNA methylation profile for analysis. Both TCGA datasets were downloaded from UCSC Xena. The DNA methylation profile GSE92679 was selected to validate the hypomethylation in the promoter region of KIF23. We used Pearson and Spearman correlation analyses to determine the relationship between methylation sites in the KIF23 promoter region and KIF23 mRNA expression. P < .05 and r > 0.3 were regarded as the criteria for a significant correlation. 2.9 Statistical analysis Student's t-test was applied to evaluate the difference in KIF23 expression between DLBCL tissues and lymphoid tissues. The association between KIF23 expression and clinical factors was analyzed by the chi-square test. Kaplan-Meier analysis was used to compare the OS between the KIF23-high and KIF23-low groups.
Univariate Cox analysis was applied to assess the relationship of OS with clinical factors and KIF23 expression in DLBCL. Multivariate Cox analysis was performed to confirm the prognostic value of KIF23 in DLBCL by including all parameters with P < .05 in the univariate Cox analysis. R (version 4.0.0) was used to perform the statistical analyses. P < .05 was considered statistically significant.
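As a purely illustrative aid to the workflow described above (and not the authors' original code), the following minimal R sketch shows how DEGs could be identified in a single expression dataset with limma using the same thresholds (FDR-adjusted P < .05, |log2 fold change| > 1) and then intersected across datasets; object names such as expr, group, and deg_lists are hypothetical placeholders.

library(limma)

find_degs <- function(expr, group, lfc_cut = 1, fdr_cut = 0.05) {
  # expr: log2 expression matrix (genes x samples); group: factor with levels "Control" and "DLBCL"
  design <- model.matrix(~ 0 + group)
  colnames(design) <- levels(group)
  contrast <- makeContrasts(DLBCL - Control, levels = design)
  fit <- eBayes(contrasts.fit(lmFit(expr, design), contrast))
  tt <- topTable(fit, coef = 1, number = Inf, adjust.method = "BH")
  rownames(tt)[tt$adj.P.Val < fdr_cut & abs(tt$logFC) > lfc_cut]
}

## One DEG vector per dataset (GSE25638, GSE44337, GSE56315, GSE32018), then:
# overlapping_degs <- Reduce(intersect, deg_lists)

## Enrichment of the overlapping DEGs (section 2.3), for example:
# library(clusterProfiler); library(org.Hs.eg.db)
# ids <- bitr(overlapping_degs, fromType = "SYMBOL", toType = "ENTREZID", OrgDb = org.Hs.eg.db)
# ekegg <- enrichKEGG(gene = ids$ENTREZID, organism = "hsa", pvalueCutoff = 0.05)

In practice, each GEO dataset would be processed separately and the resulting DEG lists intersected, as described in section 2.2.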
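A comparable sketch for the survival analyses of sections 2.5 and 2.9, again illustrative only, assumes a hypothetical clinical data frame clin with columns os_time, os_status, kif23, and the clinical covariates; patients are split at the median KIF23 expression before Kaplan-Meier and Cox modeling.

library(survival)
library(survminer)

clin$kif23_group <- factor(ifelse(clin$kif23 >= median(clin$kif23), "High", "Low"))

km_fit <- survfit(Surv(os_time, os_status) ~ kif23_group, data = clin)
ggsurvplot(km_fit, data = clin, pval = TRUE)   # Kaplan-Meier curves with log-rank P value

uni <- coxph(Surv(os_time, os_status) ~ kif23, data = clin)   # univariate Cox model
multi <- coxph(Surv(os_time, os_status) ~ kif23 + age + ecog + stage + ldh_ratio,
               data = clin)                                    # multivariate Cox model
summary(multi)   # hazard ratios with 95% confidence intervals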
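The IHC staining index of section 2.6 is simply the product of the intensity score (0–3) and the proportion score (1–4), dichotomized at the median; a minimal sketch with a hypothetical data frame ihc:

ihc$staining_index <- ihc$intensity * ihc$proportion_score   # possible values 0-12
ihc$kif23_group <- ifelse(ihc$staining_index >= median(ihc$staining_index),
                          "KIF23-high", "KIF23-low")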
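The study ran GSEA with the desktop tool (version 4.0.3) and the Hallmark gene sets; an R-based alternative with comparable logic, shown here only as a hedged sketch (expr and kif23_group are hypothetical objects), could use the fgsea and msigdbr packages.

library(fgsea)
library(msigdbr)

hallmark <- msigdbr(species = "Homo sapiens", category = "H")   # Hallmark gene sets
pathways <- split(hallmark$gene_symbol, hallmark$gs_name)

## Rank genes by a simple association statistic for the KIF23-high vs KIF23-low grouping
ranks <- sort(apply(expr, 1, function(x) cor(x, as.numeric(kif23_group == "High"))),
              decreasing = TRUE)

res <- fgsea(pathways = pathways, stats = ranks, minSize = 15, maxSize = 500)
head(res[order(res$padj), ])   # pathways ordered by adjusted P value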
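Finally, the methylation-expression correlation of section 2.8 could be sketched as below, assuming a hypothetical probe-by-sample beta-value matrix meth_beta and a matched KIF23 expression vector kif23_expr, with the same cut-offs (r > 0.3, P < .05).

cor_by_probe <- apply(meth_beta, 1, function(beta) {
  pearson <- cor.test(beta, kif23_expr, method = "pearson")
  spearman <- suppressWarnings(cor.test(beta, kif23_expr, method = "spearman"))
  c(pearson_r = unname(pearson$estimate), pearson_p = pearson$p.value,
    spearman_r = unname(spearman$estimate), spearman_p = spearman$p.value)
})
cor_tab <- as.data.frame(t(cor_by_probe))
subset(cor_tab, abs(pearson_r) > 0.3 & pearson_p < 0.05)   # probes meeting the correlation criteria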
Results and discussion 3.1 DEG identification Four datasets were used to identify DEGs between DLBCL and the corresponding normal tissues. For GSE25638, 1787 upregulated genes and 318 downregulated genes were obtained (Fig. A). For GSE44337, 1697 upregulated genes and 590 downregulated genes were obtained (Fig. B). For GSE56315, 3665 upregulated genes and 4856 downregulated genes were obtained (Fig. C). For GSE32018, 486 upregulated genes and 884 downregulated genes were obtained (Fig. D). Finally, 80 overlapping genes were significantly upregulated (Fig. E), and 15 overlapping genes were remarkably downregulated (Fig. F) in DLBCL compared to normal tissues (Table ). The log fold changes and P values of these overlapping genes are listed in supplementary file 1. 3.2 KEGG pathway and GO annotation enrichment analyses Next, two R packages (clusterProfiler, org.Hs.eg.db) were applied to conduct KEGG pathway and GO annotation enrichment analyses of these 95 overlapping DEGs. The top four KEGG enrichment pathways were cell cycle, oocyte meiosis, progesterone-mediated oocyte maturation, and cysteine and methionine metabolism (Fig. A). The GO biological process analysis revealed that these 95 common DEGs were significantly enriched in chromosome segregation, nuclear division, and organelle fission (Fig. B). The GO cellular component analysis showed that chromosomal region, spindle, centromeric region, and kinetochore were markedly enriched (Fig. C). Besides, the top three GO molecular function terms were ATPase activity, catalytic activity acting on DNA, and DNA-dependent ATPase activity (Fig. D). 3.3 PPI network and hub genes identification The STRING database was used to construct the protein-protein interaction network among the 95 common DEGs. The cytoHubba plugin of Cytoscape (3.8.0) was applied to identify and visualize the top 30 hub genes. Given the heterogeneity of the protein network, it is reasonable to use more than one algorithm to identify hub genes. Since the local-based method of cytoHubba (including four algorithms: Degree (Deg), Maximal Clique Centrality, Density of Maximum Neighborhood Component, and Maximum Neighborhood Component) considers the relationship between a node and its direct neighbors, we used all four algorithms of this method to identify the top 30 hub genes (Fig. A-D). Then, 17 overlapping hub genes were obtained (Fig. A). All 17 hub genes were upregulated in DLBCL compared to normal tissues in the 4 GEO datasets (Fig. B). Similar results were obtained from the GEPIA database (Fig. C-S). 3.4 Relationship between hub genes and prognosis of DLBCL patients For OS, two datasets (GSE10846, consisting of 414 DLBCL cases, and GSE32918, consisting of 244 DLBCL cases) were used for Kaplan-Meier analysis. Among these 17 hub genes, three genes, KIF23 ( P = .01, P = .01), TRIP13 ( P < .01, P < .01), and ZWINT ( P < .001, P < .01), were significantly associated with shorter survival in both datasets (Fig. A-F). Next, GSE23501 (n = 69) was used to validate the results. In this dataset, patients with higher expression of KIF23 had a poor prognosis ( P = .04), while patients with different expression levels of TRIP13 ( P = .77) or ZWINT ( P = .75) showed no significant differences in prognosis (Fig. G-I). Therefore, KIF23 was considered a critical gene in DLBCL. 3.5 IHC validation of KIF23 importance in DLBCL We then determined the KIF23 expression level in 45 samples, including 17 lymph nodes and 27 DLBCL samples.
IHC experiments validated that DLBCL samples showed higher expression of KIF23 compared to lymphoid tissues (Fig. A-C). We classified DLBCL patients into four categories (Fig. B; staining value ≥ 9: +++; staining value ≥ 4: ++; staining value ≥ 1: +; staining value = 0: −). Furthermore, 77 DLBCL patients with clinical information were used for survival analysis. We separated all patients into two groups according to the median value of the KIF23 staining index: a KIF23-high expression group (≥ median value) and a KIF23-low expression group (< median value). The clinical information and KIF23 staining values of these 77 DLBCL paraffin-embedded tissues are detailed in supplementary file 2. Kaplan-Meier analysis suggested that higher expression of KIF23 was significantly associated with poor prognosis in DLBCL (Fig. D). 3.6 KIF23 as an independent prognostic factor in DLBCL To identify the importance of KIF23 in DLBCL, we used information from GSE10846 for further studies because this dataset had detailed information on clinical and treatment attributes. Univariate analysis indicated that age (HR 1.03; CI 1.018–1.041; P < .001), regimen (HR 0.53; CI 0.376–0.719; P < .001), ECOG (HR 1.82; CI 1.551–2.136; P < .001), stage (HR 1.51; CI 1.293–1.758; P < .001), LDH ratio (HR 1.14; CI 1.095–1.181; P < .001), extranodal sites (HR 1.21; CI 1.001–1.452; P < .001), and KIF23 (HR 1.36; CI 1.101–1.690; P < .001) significantly correlated with OS (Table ). However, males and females showed no significantly different outcomes in DLBCL (HR 1.02; CI 0.744–1.402; P = .89). Follow-up multivariate analysis of these significant clinical factors demonstrated that older age (HR 2.41; CI 1.43–4.06; P < .001), ECOG 2 (HR 2.87; CI 1.50–5.47; P < .01), ECOG 3 (HR 2.58; CI 1.22–5.50; P = .01), ECOG 4 (HR 9.66; CI 2.95–31.61; P < .001), stage 2 (HR 3.10; CI 1.32–6.83; P < .01), stage 3 (HR 2.90; CI 1.24–6.83; P = .02), stage 4 (HR 3.51; CI 1.50–8.21; P < .01), and KIF23 (HR 1.28; CI 1.01–1.61; P = .04) were independent risk factors for poor prognosis. Treatment with the R-CHOP regimen (HR 0.45; CI 0.28–0.71; P < .001) prolonged the survival time of patients (Fig. ). The results from the univariate and multivariate analyses indicated that KIF23 can be an independent risk factor for poor prognosis in DLBCL. 3.7 Relationship between KIF23 and prognosis of patients under different clinical conditions We then explored the effect of KIF23 in patients under different clinical conditions. Kaplan-Meier analysis revealed that higher expression of KIF23 was associated with inferior prognosis in patients who received the R-CHOP regimen ( P < .01, Fig. B). The HR for R-CHOP was 1.56 (CI 1.046–2.314, P = .03, Table ), suggesting that KIF23 may be a negative prognostic indicator for patients who received the R-CHOP regimen, whereas in both the Kaplan-Meier ( P = .77) and univariate analyses (HR 1.23, CI 0.87–1.742, P = .24), the KIF23 expression level showed no significant effect on prognosis in patients who received the CHOP regimen (Fig. A, Table ). In the early-stage (stage 1 and stage 2) and late-stage (stage 3 and stage 4) groups, Kaplan-Meier analysis indicated that there were no differences in prognosis between the KIF23-higher and KIF23-lower groups ( P = .07 and P = .07, respectively, Fig. C-D). Univariate analysis indicated similar results (HR 1.40, CI 0.8918–2.188, P = .14, Table ) in the early stage.
However, the HR for the late stage was 1.41 (CI 1.057–1.878, P = .02, Table ), indicating that higher expression of KIF23 might be a prognostic risk factor in patients with late-stage disease. Higher expression of KIF23 was significantly associated with poor clinical outcomes ( P = .02, Fig. E) in patients with a lower LDH ratio, whereas there was no notable difference in patients with a higher LDH ratio ( P = .2, Fig. F). Univariate analysis suggested no apparent differences between KIF23-higher and KIF23-lower expression in patients with a lower or higher LDH ratio (HR 1.53, CI 0.9983–2.339, P = .51 and HR 1.20, CI 0.9219–1.552, P = .18, respectively, Table ). Patients with higher KIF23 expression showed a shorter survival time in the group with extranodal sites ( P < .01, Fig. H), and univariate analysis showed similar results (HR 1.40, CI 1.002–1.966, P = .04, Table ). In the group without extranodal sites, Kaplan-Meier analysis showed no significant difference in prognosis between higher and lower KIF23 expression ( P = .31, Fig. G). By contrast, univariate analysis revealed an HR of 1.37 (CI 1.023–1.842, P = .03, Table ), indicating that higher expression of KIF23 was associated with poor prognosis in patients without extranodal sites. Moreover, a subgroup analysis revealed that upregulation of KIF23 was a prognostic risk factor for reduced 3-year ( P < .01, Fig. J; HR 1.10, CI 0.799–1.513, P = .56, Table ), 5-year ( P < .01, Fig. K; HR 1.28, CI 1.027–1.589, P = .03, Table ), and 10-year ( P < .01, Fig. L; HR 1.41, CI 1.141–1.734, P < .01, Table ), but not 1-year ( P = .13, Fig. I; HR 1.45, CI 1.170–1.799, P < .001, Table ), OS in DLBCL patients. 3.8 Relationship between KIF23 and clinical features In addition, we evaluated the correlation between KIF23 and clinical features in DLBCL using the chi-square test. As shown in Table , different regimens showed different effects on KIF23 expression ( P = 0.04), while the other clinical factors (gender, age, ECOG, stage, LDH ratio, and extranodal sites) did not significantly affect KIF23 expression. 3.9 Molecular mechanism of KIF23 in DLBCL KIF23 and its co-expression genes may function together in cells. Hence, we first aimed to identify genes that showed significant correlations with KIF23. We used the LinkedOmics database to explore the KIF23 co-expression mode in the DLBCL cohort from TCGA. As shown in Fig. A, 2875 genes (dark red dots) displayed significantly positive associations with KIF23, while 2671 genes (dark green dots) displayed negative associations (false discovery rate, FDR < .01). Fig. B-C show the top 50 significant genes positively and negatively related to KIF23. Then, we performed KEGG and GO analyses to explore the molecular functions of KIF23 and its co-expression genes in DLBCL. The results showed that KEGG pathways were enriched in cell cycle, RNA transport, viral carcinogenesis, ubiquitin-mediated proteolysis, cellular senescence, oocyte meiosis, ribosome biogenesis in eukaryotes, homologous recombination, the Fanconi anemia pathway, and basal transcription factors (Fig. D). GO annotation showed that KIF23 may be involved in covalent chromatin modification, histone modification, organelle fission, nuclear division, chromosome segregation, DNA replication, nuclear chromosome segregation, mitotic nuclear division, sister chromatid segregation, and mitotic sister chromatid segregation (Fig. E). These results suggest that KIF23 plays a crucial role in the development of DLBCL.
Most importantly, to explore the tumor-related signaling pathways in which KIF23 may participate, we chose h.all.v7.2.symbols.gmt [Hallmarks] as the gene set database. We divided the DLBCL patients from TCGA into two groups according to the median mRNA level of KIF23. The results from GSEA indicated that patients with higher expression of KIF23 showed notably positive correlations with PI3K/AKT/mTOR signaling (FDR < .01, Fig. F), TGF-beta signaling (FDR < .01, Fig. G), and Wnt/beta-catenin signaling (FDR < .01, Fig. H). These three pathways are frequently activated and play important roles in human cancers. These findings suggest that KIF23 may promote DLBCL through PI3K/AKT/mTOR, TGF-beta, and Wnt/beta-catenin signaling. 3.10 Hypomethylation of the promoter region might be one reason for the upregulation of KIF23 in DLBCL It is well known that hypomethylation in the promoter region leads to transcriptional activation. Given the higher expression of KIF23 in DLBCL, we wondered whether hypomethylation is present in the promoter region of KIF23 in DLBCL; however, methylation levels in the promoter region of KIF23 have not previously been reported in DLBCL. To detect methylation levels in the promoter region of KIF23, we analyzed two DNA methylation profiles (TCGA, n = 48; GSE92676, n = 97). The results from TCGA suggested that CpG probes in the promoter region of KIF23 showed hypomethylation (Fig. A), and the mean methylation levels of these CpG probes were all less than 0.1 (Fig. B). We obtained similar results in the GSE92676 dataset (Fig. C-D). To verify the relationship between methylation levels in promoter regions and mRNA expression levels, we combined the RNA-seq and DNA methylation profiles from TCGA for further analysis. The results confirmed that the methylation levels of 4/8 CpG probes (cg15465548, cg08817171, cg16587794, cg05749577) showed significant negative correlations with KIF23 mRNA levels (Fig. E-H). Hence, KIF23 hypomethylation in its promoter region might be one reason for its upregulation in DLBCL.
Discussion Previous studies indicated that KIF23 is involved in the initiation, development, and progression of tumors. Overexpression of KIF23 has been detected in squamous cell carcinoma, gastric cancer, breast cancer, and lung cancer. In this study, by integrating four gene expression datasets covering DLBCL patients and normal tissues, we identified that KIF23 expression showed a remarkable discrepancy between normal tissues and DLBCL tumor tissues. By Kaplan-Meier analysis of four DLBCL profiles (n = 804) with clinical information, we found that higher expression of KIF23 was consistently associated with poor clinical outcomes. Multivariate analysis indicated that KIF23 might be an independent prognostic factor in DLBCL. Moreover, univariate Cox regression analysis revealed that higher KIF23 expression was a prognostic risk factor for patients who received the R-CHOP regimen ( P = .03), in the late stage ( P = .02), with extranodal sites ( P = .03), and without extranodal sites ( P = .04). We also showed that higher expression of KIF23 was an adverse factor for decreased 3-, 5-, and 10-year overall survival. Consequently, it is reasonable to conclude that KIF23 plays a critical role in DLBCL and is associated with shorter survival in DLBCL patients. The PI3K/AKT/mTOR, TGF-β, and Wnt/β-catenin signaling pathways are frequently activated in human cancers. These pathways play critical roles in cell proliferation, metabolism, differentiation, invasion/metastasis, and survival. In gastric cancer, KIF23 facilitated cell proliferation through directly binding to APC membrane recruitment 1 (Amer) to activate the Wnt/β-catenin signaling pathway. In our study, the results of GSEA suggested that DLBCL patients with higher KIF23 expression showed activation of the PI3K/AKT/mTOR, TGF-β, and Wnt/β-catenin signaling pathways. These findings broaden our knowledge of the molecular mechanism, and we assume that KIF23 may interact with these pathways to initiate and promote DLBCL. Inhibition of PI3K and mTOR with NVP-BEZ235 can significantly reduce proliferation and the phosphorylation of 4EBP1, thereby inducing cell death in DLBCL. By analyzing gene expression profiles of rituximab (CD20-specific antibody)-responsive and -unresponsive DLBCL cell lines, researchers found that rituximab affected not only the expression of genes related to the classical pathway but also TGF-β and Wnt signaling. A previous study indicated that FOXP1 could enhance Wnt/β-catenin signaling and improve the sensitivity to Wnt signaling inhibitors in DLBCL. Combination therapy targeting KIF23 and these pathways may provide a new treatment for DLBCL. DNA methylation is the most common form of epigenetic modification, and dysregulation of DNA methylation is involved in the carcinogenesis of human cancers. Previous studies indicated that hypermethylation of the promoter region is significantly associated with transcriptional silencing. Nevertheless, the methylation status of KIF23 in DLBCL has not been reported previously. By analyzing the 450 K microarray data of DLBCL cohorts, we found that the KIF23 promoter region was hypomethylated. Further study confirmed that methylation levels in the promoter region showed significant inverse correlations with KIF23 mRNA levels in DLBCL. Moreover, after rigorous screening and validation, we confirmed that KIF23 was highly expressed in DLBCL compared to lymphoid tissues. Thus, we propose that hypomethylation of the promoter region might be one reason for the higher expression of KIF23 in DLBCL.
However, our study had some limitations. First, although we verified that KIF23 was an adverse prognostic factor in four DLBCL cohorts, the correlations between KIF23 expression and the prognosis of patients under different clinical conditions were only analyzed in one GEO dataset (GSE10846, containing 414 samples), since the other public datasets lack complete clinical information; a large-sample study is required to validate these findings. Second, all the mechanisms of KIF23 identified in DLBCL were based on bioinformatic analysis. Third, due to the lack of DNA methylation profiles of normal lymphoid tissues, our study lacked a comparison of methylation levels in the KIF23 promoter region between normal and DLBCL tissues. Moreover, we could not confirm the relationships between DNA methylation levels in the promoter region of KIF23 and KIF23 mRNA levels in the dataset GSE92676 because it lacks the corresponding RNA expression data. Further experimental exploration and validation in vitro and in vivo are necessary.
Conclusions In summary, our results provide evidence for the involvement of KIF23 in DLBCL by unveiling its prognostic value and the signaling pathways it potentially affects. Further analysis indicated that hypomethylation in the promoter region of KIF23 might lead to its upregulation. KIF23 may serve as a potential therapeutic target in DLBCL.
The authors thank Dr. Jianming Zeng (University of Macau) and his team for generously sharing their knowledge of bioinformatics.
Concept and design: Yuqi Gong, Jing Zhang, Zhengrong Mao, and Ren Zhou. Conceptualization: Jing Zhang, Ren Zhou, Yuqi Gong, Zhengrong Mao. Data analysis: Yuqi Gong. Data curation: Guoping Ren, Lingna Zhou, Yuqi Gong, Zhe Wang. Formal analysis: Yuqi Gong. Funding acquisition: Jing Zhao, Ren Zhou, Zhengrong Mao. Investigation: Jing Zhao, Lingna Zhou, Yuqi Gong, Zhe Wang. Methodology: Yuqi Gong. Resources: Yuqi Gong. Sample collection: Yuqi Gong, Lingna Zhou, Jing Zhao, Zhe Wang, Guoping Ren. Software: Yuqi Gong. Validation: Yuqi Gong. Visualization: Yuqi Gong. Writing – original draft: Liya Ding, Yuqi Gong. Writing – review & editing: Liya Ding. Writing: Yuqi Gong, Liya Ding.
Supplemental Digital Content
Multi-omics analysis of soil microbiota and metabolites in dryland wheat fields under different tillage methods
Winter wheat ( Triticum aestivum L.) is a nutrient-rich food crop that accounts for 26% of global agricultural output, feeds > 35% of the global population, and is of great significance for nutrition security. In China, wheat cultivation accounts for approximately 22% of the total crop area, and the Loess Plateau is one of the most important cultivation areas, with wheat cultivation accounting for 44% of its total cultivated land. As the Loess Plateau provides the main source of staple food for residents in this region, it is pivotal to maintain stable wheat production there. Tillage practices have a profound effect on the soil environment and thereby on crop yield. In dryland wheat fields, deep ploughing, subsoiling tillage, or rotary tillage during the fallow period (rainy season) has been extensively applied to promote the recovery of soil moisture. Conservation methods such as subsoiling (SS) and no-tillage (NT) can reduce soil disturbance and increase the water-use efficiency of crops, thereby improving the physical and chemical properties of the soil, water-use efficiency, and crop yield. Further, SS combined with organic fertilizer not only significantly increases soil microbial biomass carbon and nitrogen, as well as soil enzyme activity, but also improves the total dry matter accumulation and water use efficiency of wheat. However, lower grain yield and water use efficiency of winter wheat were also reported in previous studies. Therefore, understanding the mechanism of soil biochemistry is important for sustainable production of winter wheat under different tillage practices. Rhizosphere soil is a complex and dynamic system that creates ecological niches for various types of microorganisms and is thus important for crop growth and development. The soil microbiome is a key factor in ecosystem functions and contributes greatly to crop health. The activity of the soil microbiome is a crucial part of soil function in organic mineralization, thereby providing phosphorus, nitrogen, and potassium in agroecosystems. Xia et al. illustrated that different tillage methods have substantial impacts on the diversity and composition of bacteria in the rhizosphere soil of wheat and that less destructive tillage methods (SS-SS and NT-NT) could preserve the integrity of soil bacteria. They reported that SS-SS was the most effective tillage method for accumulating soil water, maintaining the balance of aerobic and anaerobic bacteria, and enhancing the metabolic ability of rhizosphere bacteria. In addition, the rhizosphere microbiome secretes effector proteins that evade the plant immune system and successfully colonize the plant roots. Therefore, the interactions between rhizosphere environmental microbes play an essential role in crop growth. Our previous studies found that soil available nutrients were maintained at higher levels under NT and SS than under DT, and SS is a good measure to improve soil nitrogen, carbon, and nutrient availability in dryland wheat fields. However, the effects of different tillage methods on rhizosphere microorganisms and metabolites during dryland wheat growth remain unknown.
The present study aimed to discover the key soil microorganisms and key metabolites in dryland wheat soil under three tillage modes (NT, SS, and DT) using metagenomics and metabolomics. In addition, the relationship between microorganisms and differential metabolites in the wheat rhizosphere was analyzed. The results of the present study can provide a theoretical basis for improving wheat yield in dryland agriculture.
Research region The experimental site is located at the National Experimental Station of Hongbu, Institute of Wheat Research, Shanxi Agricultural University (Linfen, Shanxi, China, 111° 33′ 07″ E, 36° 13′ 02″ N). The altitude of this site is 457.9 m, and the average annual rainfall is 468.5 mm (with about 65% occurring during the fallow period from July to September and only 35% during the growth period from October to June). The average soil water content in the 0-20 cm soil layer (W/W) was 12.3% across all growth stages. Annual evaporation is 1829.4 mm. The annual sunshine duration is 2416.5 h, with a frost-free period of 184 d. The ≥ 0 °C effective accumulated temperature is 4617.5 °C, and the ≥ 10 °C effective accumulated temperature is 4151.0 °C. The soil type of the experimental site was calcareous cinnamon soil, and the soil properties and nutrient conditions before the experiment were as follows: pH 8.63, organic carbon 11.67 g/kg, total nitrogen 0.93 g/kg, and total phosphorus 0.73 g/kg. No irrigation was applied at any stage in the field. Experimental design and management The field experiment has been conducted since 2016. Winter wheat ripens once a year; it is sown in early October and harvested mechanically in early June of the following year, leaving a stubble of 10-15 cm in height. The straw was then crushed, mulched, and returned to the field. Three different tillage modes (NT, no tillage; SS, subsoiling; DT, deep tillage) were applied in mid-July in a randomized complete block design with three replicates (plots 50 m long × 10 m wide). In the NT treatment, no tillage was performed until subsequent crop planting. SS was performed to a depth of 30-35 cm using a subsoiling machine (model number: ISZL-300, Shandong Aolong Agricultural Machinery Manufacturing Co., Ltd., Shandong, China), and DT was performed to a depth of 30-35 cm using a plow machine (model number: 1 L-320, Shanxi Jishan Agricultural Machinery Manufacturing Co., Ltd., Shanxi, China). Winter wheat ( Triticum aestivum cv. Jinmai 92) was sown at a seeding rate of 150 kg/ha in early October. Sowing and fertilization were performed simultaneously, and approximately 750 kg/ha of Apollo compound fertilizer (N:P 2 O 5 :K 2 O = 22:16:5, total nutrient content ≥ 40%) was applied as the basal fertilizer for all treatments. No topdressing was applied at later stages. Later field management (prevention and control of diseases, pests, and weeds) was consistent across all tillage treatments. Soil sampling Three replicates for each treatment were collected randomly in April 2021. At the grain-filling stage, rhizosphere soil samples were collected from the roots of wheat. Wheat roots were manually excavated along with the surrounding soil. Impurities and loose soil were picked out and shaken off gently, and the rhizosphere soil attached to the root surface was collected with a brush into a sterile tube (50 mL). One part of each sample (1 g) was stored at -80 °C as soon as possible for subsequent analysis of microbial community structure, and the other portion (1 g) was used for metabolomic analysis. Sequencing and microbiome analysis Soil samples (250 mg) collected from the wheat rhizosphere of the NT, SS, and DT treatment groups were sent to Personal Biotechnology Co., Ltd. (Shanghai, China) for 16S rRNA gene sequencing and ITS sequencing. Briefly, an OMEGA soil DNA kit (M5635-02, Omega Bio-Tek, USA) was used to isolate total genomic DNA from each rhizosphere sample.
The quantity and quality of the extracted DNA were assessed using a NanoDrop NC2000 spectrophotometer (Thermo Fisher Scientific, USA) and 1.5% agarose gel electrophoresis, respectively. For bacterial community analysis, the V3-V4 region of the 16S rRNA gene was amplified using the forward primer V3V4-F (ACTCCTACGGGAGGCAGCA) and the reverse primer V3V4-R (GGACTACHVGGGTWTCTAAT). For fungal community analysis, the ITS region was amplified with the forward primer ITS1-F (GGAAGTAAAAGTCGTAACAAGG) and the reverse primer ITS1-R (GCTGCGTTCTTCATCGATGC). PCR amplification products were purified using Vazyme VAHTS DNA Clean Beads (Vazyme, Nanjing, China) and quantified using the Quant-iT PicoGreen dsDNA Assay Kit (Invitrogen, USA). The amplification products were then pooled at equal concentrations, and 16S rRNA gene and ITS sequencing was performed using the Illumina NovaSeq 6000 SP Reagent Kit (500 cycles). QIIME 2 (version 2019.4, https://qiime2.org ) was used to analyze the sequencing data according to Bolyen et al. Briefly, raw sequencing data were quality-filtered, denoised, merged, and checked for chimeras using the DADA2 method to obtain unique amplicon sequence variants (ASVs). Subsequently, based on the classify-sklearn algorithm, the Greengenes reference database classifier (release 13.8) and the UNITE reference database classifier (release 8.0) were used to annotate the operational taxonomic units (OTUs) from 16S rRNA gene sequencing and the ASVs from ITS sequencing, respectively. Alpha and beta diversity were determined using QIIME 2. The Chao1 and Observed species indices indicate species richness, the Shannon and Simpson indices reflect species diversity, Faith's PD index represents phylogeny-based diversity, and Pielou's evenness indicates species evenness. Heatmaps generated in R (version 3.3.1, https://cran.R-project.org ) were used to compare microbiota taxa among groups, and LEfSe (linear discriminant analysis effect size) was used to identify crucial biomarkers among the groups. Finally, the functions of the identified soil microbiota were predicted using PICRUSt2 and analyzed using the metagenomeSeq package in R (version 3.3.1).
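For illustration, the among-group comparison of alpha-diversity indices can be sketched in R as below; the data frame name, column names, and the use of Kruskal-Wallis tests are assumptions made for this example and do not reproduce the original analysis scripts.

```r
# Minimal illustrative sketch (assumed inputs, not the original scripts):
# alpha_div is a per-sample table exported from QIIME 2 with a "group" column
# (NT, SS, or DT) and one column per diversity index.
alpha_div <- read.csv("alpha_diversity.csv", stringsAsFactors = FALSE)
alpha_div$group <- factor(alpha_div$group, levels = c("NT", "SS", "DT"))

indices <- c("chao1", "observed_species", "shannon", "simpson", "pielou_e", "faith_pd")
p_values <- sapply(indices, function(idx)
  kruskal.test(alpha_div[[idx]] ~ alpha_div$group)$p.value)

# P > 0.05 for all indices would be consistent with the reported absence of
# significant differences in alpha diversity among the tillage treatments.
print(data.frame(index = indices, p_value = round(p_values, 3)))
```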
Isolation of metabolites and metabolomics analysis in rhizosphere
For each rhizosphere sample (200 mg), 600 μL of pre-cooled methanol containing 2-chloro-L-phenylalanine (4 ppm) was added. After vortexing for 60 s, 100 mg of glass beads was added, and the samples were ground at 60 Hz in a tissue grinder for 90 s. After sonication for 30 min at room temperature and then for 30 min on ice, the mixture was centrifuged at 12,000 rpm for 10 min at 4 °C, and the supernatant was filtered through a 0.22-μm membrane. The filtered samples were subjected to liquid chromatography-mass spectrometry (LC-MS). A Vanquish UHPLC system (Thermo Fisher Scientific) equipped with an ACQUITY UPLC HSS T3 column (1.8 µm, 2.1 × 150 mm; Waters, Milford, MA, USA) and coupled to an Orbitrap Exploris 120 mass spectrometer (Thermo Fisher Scientific) was used. The column temperature was maintained at 40 °C, the flow rate was 0.25 mL/min, and the injection volume was 2 μL. The mobile phases for positive mode were 0.1% formic acid in acetonitrile (v/v; B2) and 0.1% formic acid in water (A2), and those for negative mode were 5 mM ammonium formate in water (A3) and acetonitrile (B3). The elution program was as follows: 0-1 min, 2% B2/B3; 1-9 min, 2%-50% B2/B3; 9-12 min, 50%-98% B2/B3; 12-13.5 min, 98% B2/B3; 13.5-14 min, 98%-2% B2/B3; and 14-20 min, 2% B2 (positive mode) or 14-27 min, 2% B3 (negative mode). The spray voltages for positive and negative modes were 3.5 kV and 2.5 kV, respectively; the capillary temperature was 325 °C; and the sheath gas and auxiliary gas were set at 30 and 10 arbitrary units, respectively. Full scans were acquired at a resolution of 60,000 over a scan range of 100-1000 m/z. The raw LC-MS data were converted to mzXML format using ProteoWizard (v3.0.8789), and peak detection, filtering, and alignment were performed using the "xcms" package in R with the parameters bw = 5, ppm = 15, peakwidth = c(5, 30), mzwid = 0.015, mzdiff = 0.01, and method = "centWave". Metabolites were then identified against the public databases HMDB ( http://www.hmdb.ca/ ), LipidMaps ( http://www.lipidmaps.org/ ), MassBank ( https://www.massbank.jp/ ), mzCloud ( https://www.mzcloud.org/ ), and KEGG ( http://www.genome.jp/kegg/ ), with a mass tolerance of less than 30 ppm. The "ropls" package in R was used for all multivariate analyses and modelling, and significantly differential metabolites were screened using the thresholds VIP > 1 and P < 0.05. The screened differential metabolites were then subjected to Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis. Metabolomic data in the form of relative quantification (peak areas) were used for all subsequent analyses.
Conjoint analysis of the microbiome and metabolomics data
Spearman's correlation coefficients and P values between the soil microbiota (bacterial and fungal communities) at the genus level and the differential metabolites were calculated using Mothur, and a correlation heatmap was drawn in R. Relationships with |rho| > 0.8 and P < 0.01 were selected to construct the correlation network, which was visualized using Cytoscape.
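To make the metabolite screening and conjoint analysis steps described above concrete, the following is a minimal R sketch. The input file names, the assumption that VIP values have been exported from the OPLS-DA model, and the use of Wilcoxon tests for per-metabolite P values are illustrative choices; the sketch does not reproduce the original ropls/Mothur/Cytoscape workflow.

```r
# Minimal sketch with assumed inputs (not the original scripts):
# peak_area   - samples x metabolites table of peak areas
# genus_abund - samples x genera relative abundances (same sample order)
# group       - treatment label ("NT", "SS", or "DT") for each sample
# vip         - named VIP values exported from the OPLS-DA model for one comparison
peak_area   <- read.csv("metabolite_peak_area.csv", row.names = 1)
genus_abund <- read.csv("genus_abundance.csv", row.names = 1)
group       <- read.csv("sample_groups.csv")$group
vip_df      <- read.csv("opls_vip_DT_vs_SS.csv", row.names = 1)
vip         <- setNames(vip_df[[1]], rownames(vip_df))

# 1) Screen differential metabolites for one pairwise comparison (VIP > 1 and P < 0.05)
keep <- group %in% c("DT", "SS")
grp  <- group[keep]
p_vals <- apply(peak_area[keep, ], 2, function(x)
  wilcox.test(x[grp == "DT"], x[grp == "SS"], exact = FALSE)$p.value)
diff_metab <- names(p_vals)[which(p_vals < 0.05 & vip[names(p_vals)] > 1)]

# 2) Spearman correlations between differential metabolites and genera; keep only
#    |rho| > 0.8 and P < 0.01 as edges of the correlation network
edges <- data.frame()
for (m in diff_metab) {
  for (g in colnames(genus_abund)) {
    ct <- suppressWarnings(
      cor.test(peak_area[[m]], genus_abund[[g]], method = "spearman"))
    if (!is.na(ct$estimate) && abs(ct$estimate) > 0.8 && ct$p.value < 0.01) {
      edges <- rbind(edges, data.frame(metabolite = m, genus = g,
                                       rho = unname(ct$estimate), p = ct$p.value))
    }
  }
}

# The resulting edge list can be imported into Cytoscape to draw the network
write.csv(edges, "genus_metabolite_edges.csv", row.names = FALSE)
```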
Structure of soil bacterial communities
According to the 16S rRNA gene sequencing results, there were 12,566, 13,376, and 12,024 OTUs in the DT, NT, and SS groups, respectively, and 3,009 OTUs were shared by the three groups (Fig. A). Principal coordinate analysis (PCoA) revealed clear clustering of the soil bacterial communities in the DT, NT, and SS groups (Fig. B). These results indicate that the sequencing depth and reliability were sufficient for further analyses. Good's coverage index was approximately 0.97 for each group (Fig. C), implying that the sequencing captured the large majority of the soil bacterial community in each sample. There were no significant differences in the Chao1, Faith's PD, Shannon, Pielou's evenness, Simpson, or Observed species values among the DT, NT, and SS groups (P > 0.05, Fig. C). It can therefore be inferred that the three tillage modes (DT, NT, and SS) had no significant effect on the alpha diversity of the soil bacterial communities on wheat roots. At the phylum level, the dominant phyla among the annotated OTUs were Proteobacteria, Acidobacteria, Gemmatimonadetes, Actinobacteria, and Firmicutes (Fig. D). The abundance of Proteobacteria in the NT, SS, and DT groups was about 30.74%, 31.04%, and 32.40%, respectively, and the abundance of Actinobacteria was 18.86%, 17.3%, and 20.15%, respectively. The abundances of Acidobacteria, Gemmatimonadetes, and Firmicutes were approximately 17.45%, 7.63%, and 10.97% in the NT group; 20.54%, 9.86%, and 3.44% in the SS group; and 16.06%, 10.94%, and 3.42% in the DT group, respectively. These results indicate that, compared with the DT group, SS significantly decreased the abundance of Actinobacteria (P < 0.05) while increasing the abundances of Acidobacteria (P < 0.05) and Firmicutes and reducing the abundance of Gemmatimonadetes (Fig. D). The top 30 soil bacterial genera were analyzed, including Subgroup-6, Bacillus, Rokubacteriales, MND1, Sphingomonas, Nitrospira, Longimicrobiaceae, Solirubrobacter, Nocardioides, JG30-KF-CM45, and Gaiella (Fig. E). The abundance of Subgroup-6 in the NT, SS, and DT groups was 11.21%, 12.22%, and 8.74%, respectively, and the abundance of Bacillus was 9.16%, 2.65%, and 2.35%, respectively. In addition, the abundances of Rokubacteriales (3.98%) and MND1 (3.04%) were highest in the SS group, the abundance of Sphingomonas (3.33%) was highest in the DT group, and the abundances of Nitrospira (0.98%), Solirubrobacter (0.87%), and Gaiella (0.70%) were lowest in the NT group (Fig. E). Linear discriminant analysis effect size (LEfSe) was used to identify biomarkers among the groups at different taxonomic levels. At the phylum level, Gemmatimonadetes was especially abundant in the DT group, and Acidobacteria, Nitrospirae, Planctomycetes, and Rokubacteria were the primary phyla in the SS group. Conversely, Firmicutes and Mortierellomycota were more abundant in the NT group (Fig. A).
At the genus level, we found that Cryptosporangium, Crossiella, Umezawaea, AKYH767, OPB56, Rhodothermaceae, OLB13, Candidatus-Pacebacteria, Bdellovibrio, Leptothrix, and CCD24 were the signature bacterial taxa in the NT group; Nocardia, Aeromicrobium, Chthonomonas, Phenylobacterium, Neorhizobium, Sphingopyxis, Silvanigrella, and Hydrogenophaga were the important taxa in the DT group; and the crucial genera in the SS group were RB41, MB-A2-108, Rubrobacter, Lachnospiraceae-NK4A136-group, Latescibacteraceae, Nitrospira, bacteriap25, Aquicella, JTB255-marine-benthic-group, Rokubacteriales, and ADurb-Bin063-1 (Fig. A). The annotated soil bacterial communities were subjected to functional analysis, which showed that they were related to "amino acid biosynthesis", "carbohydrate biosynthesis", "carboxylate degradation", "fatty acid and lipid biosynthesis", "fermentation", "TCA cycle", "cofactor, prosthetic group, and vitamin biosynthesis", and "electron transfer" (Fig. ). The metagenomeSeq method was used to identify significantly different metabolic pathways. PWY-7377 (cob(II)yrinate a,c-diamide biosynthesis I (early cobalt insertion)), PWY-6654 (phosphopantothenate biosynthesis III), PWY-6349 (CDP-archaeol biosynthesis), PWY-6350 (archaetidylinositol biosynthesis), PWY-5532 (adenosine nucleotides degradation IV), PWY-5198 (factor 420 biosynthesis), PWY-7286 (7-(3-amino-3-carboxypropyl)-wyosine biosynthesis), and PWY-5507 (adenosylcobalamin biosynthesis I (early cobalt insertion)) were the differential functional pathways between the NT and DT groups (Fig. B). LACTOSECAT-PWY (lactose and galactose degradation I) was the differential pathway between the SS and DT groups (Fig. B). Moreover, PWY-5392 (reductive TCA cycle II), PWY-6915 (pentalenolactone biosynthesis), ORNDEG-PWY (superpathway of ornithine degradation), PWY-5507, PWY-7377, PWY-7644 (heparin degradation), and THREOCAT-PWY (superpathway of L-threonine metabolism) were the differential pathways between the NT and SS groups (Fig. B).
Structure of soil fungal communities
ITS sequencing showed that 516, 500, and 417 OTUs were observed in the NT, SS, and DT groups, respectively, of which 200 OTUs were shared among the three groups (Fig. A). The PCoA results showed that the samples in the NT, DT, and SS groups were well aggregated (Fig. B), indicating that the sequencing was reliable and could be used for further analyses. Good's coverage values for the NT, SS, and DT groups were all close to 1 (Fig. C), indicating that the sequencing covered nearly all of the soil fungal communities in each sample. No significant differences in Pielou's evenness, Chao1, Shannon, Simpson, or Observed species indices were observed among the NT, SS, and DT groups (P > 0.05, Fig. C), implying that the alpha diversity of the soil fungal communities in wheat cultivation was not significantly altered by the three tillage modes (NT, SS, and DT). We further explored changes in the composition of specific soil fungal communities at the phylum and genus levels. As shown in Fig. D, Ascomycota, Basidiomycota, Glomeromycota, Mortierellomycota, and Mucoromycota were the five most dominant phyla in the soil fungal communities.
Relative to the DT group (77.41%), the abundance of Ascomycota was higher in the SS group (85.24%), whereas the abundance of Basidiomycota in the NT group (1.41%) was lower than that in the DT group (4.64%), although these differences were not significant. The abundances of Mortierellomycota were 2.43%, 0.98%, and 0.80% in the NT, SS, and DT groups, respectively. At the genus level, the top 30 fungal genera in the NT, SS, and DT groups were analyzed, including Staphylotrichum, Coniochaeta, Dichotomopilus, Phaeosphaeria, Immersiella, Fusarium, Mycosphaerella, Trichoderma, Humicola, Cladorrhinum, Cercophora, and Preussia (Fig. E). The relative abundance of Staphylotrichum in the SS group reached 11.63%, while the abundances of Coniochaeta, Dichotomopilus, and Immersiella in the DT group were 14.87%, 10.99%, and 7.03%, respectively. The abundance of Phaeosphaeria was approximately 9.84% in the NT group, higher than in the other two groups (Fig. E). The abundances of Mycosphaerella/Trichoderma were approximately 5.55%/6.60% in the NT group, 3.10%/1.25% in the SS group, and 0.63%/1.07% in the DT group. Additionally, the relative abundances of Humicola, Cladorrhinum, and Preussia in the SS group were 4.30%, 4.84%, and 4.43%, respectively, higher than in the other two groups. Compared with the SS (0.20%) and DT (0.10%) groups, the abundance of Cercophora was highest in the NT group (4.59%). Based on the LEfSe results, Ctenomyces was a crucial fungal genus in the SS group, and Pezizella, Cordyceps, Monocillium, and Subulicystidium were the signature fungal genera in the DT group. The biomarker fungal genera in the NT group were Alfaria, Stilbella, Metacordyceps, Diutina, Candida, Pichia, Cyphellophora, and Pyrenochaetopsis (Fig. A). Finally, functional analysis of these soil fungal communities predicted associations with "carbohydrate biosynthesis", "amino acid biosynthesis", "electron transfer", "respiration", "nucleoside and nucleotide biosynthesis", "pentose phosphate pathways", "carboxylate degradation", and "TCA cycle" (Fig. ). Gluconeogenesis I (Gluconeo-PWY) was identified as a differential pathway between the NT and SS groups (Fig. B).
Identification of differential metabolites and functional analysis
Metabolomic analysis (Fig. A) showed that the proportion of characteristic peaks with a relative standard deviation (RSD) < 30% in the quality control samples was 83.2%, indicating that the metabolomics data were reliable and suitable for biomarker detection. Orthogonal projections to latent structures discriminant analysis (OPLS-DA) showed that the metabolite profiles of the NT, SS, and DT treatments were clearly separated (Fig. B), implying that the three tillage modes (NT, SS, and DT) had substantial effects on the metabolic profiles of the wheat rhizosphere and that the data could be used for subsequent analyses. Based on the thresholds of P < 0.05 and VIP > 1, 47, 29, and 40 differential metabolites were identified in the DT vs. SS, DT vs. NT, and SS vs. NT comparisons, respectively (Fig. C). Compared with the SS group, 21 down-regulated metabolites and 26 up-regulated metabolites (including N-methyl-2-pyrrolidinone, spermidine, pelargonic acid, isovitexin, and 3,4-dihydroxymandelic acid) were found in the DT group.
Compared with the NT group, there were 16 down-regulated metabolites (including terephthalate, L-Dopa, and 3-oxo-5beta-cholanate) and 13 up-regulated metabolites (including 5'-methylthioadenosine and 3-methylthiopropionic acid) in the DT group (Fig. C, D). Additionally, compared with the NT group, 27 down-regulated metabolites (including isovitexin, terephthalate, spermidine, and N-methyl-2-pyrrolidinone) and 13 up-regulated metabolites (including pentadecanoic acid) were identified in the SS group (Fig. C, D). Clustering heatmaps of all identified differential metabolites in the different groups are shown in Fig. . The identified differential metabolites were subjected to KEGG pathway enrichment analysis. Specifically, 57, 96, and 81 KEGG pathways were enriched for the differential metabolites in the DT vs. NT, DT vs. SS, and SS vs. NT comparisons, respectively (Fig. ). The differential metabolites between the DT and NT groups were mainly involved in "biosynthesis of plant secondary metabolites", "catecholamine transferase inhibitors", "synthesis and degradation of ketone bodies", "cocaine addiction", and "cysteine and methionine metabolism" (Fig. A). The significantly enriched pathways between the DT and SS groups were "galactose metabolism", "ABC transporters", "taste transduction", "beta-alanine metabolism", and "central carbon metabolism in cancer" (Fig. B). The pathways mainly enriched between the SS and NT groups were "ABC transporters", "phosphotransferase system (PTS)", "beta-alanine metabolism", "taste transduction", and "starch and sucrose metabolism" (Fig. C).
Correlation between the crucial bacterial communities, fungal communities, and differential metabolites
After removing duplicates, 72 differential metabolites were obtained across the NT, SS, and DT comparisons and were analyzed together with the 30 differential bacterial genera and 13 differential fungal genera. Among the differential metabolites, norselegiline and pentadecanoic acid had about fifteen correlations with bacterial genera on average (Fig. A). In particular, pentadecanoic acid had the most associations, including ten positive and ten negative correlations. Differential metabolites had four correlations on average with fungal genera (Fig. B). Among the fungal genera, Ctenomyces had the most associations, including twenty-nine positive and ten negative correlations with differential metabolites. Based on the network of bacterial genera and metabolites, glutaric acid was positively correlated with Bdellovibrio and Aeromicrobium but negatively correlated with Aquicella (Fig. A). Phenylobacterium was negatively correlated with 1-hexadecanol, norselegiline, and azelaic acid; Sphingopyxis was negatively correlated with azelaic acid, (R)-salsolinol, pantothenic acid, and pyridoxal phosphate; and pantothenic acid and pyridoxal phosphate were positively correlated with Latescibacteraceae and JTB255-marine-benthic-group, respectively (Fig. A). 9(S)-HPODE was negatively correlated with RB41, MB-A2-108, Nitrospira, bacteriap25, and Rokubacteriales, whereas Rokubacteriales and RB41 were both positively correlated with D-fructose, and RB41 was also negatively correlated with lithocholic acid and terephthalate (Fig. A). Moreover, CCD24 was positively correlated with (R)-3-hydroxybutyric acid and 1,7-dimethyluric acid (Fig. A).
From the network of fungal genera and metabolites, we found that Cordyceps was positively correlated with fluvoxamine but negatively correlated with 1-hexadecanol, and that Pichia, Candida, and Diutina were positively correlated with methyl jasmonate, p-synephrine, and terephthalate (Fig. B).
The highly diverse composition of rhizosphere microbial communities influences the adaptability, productivity, and growth of plants and plays important roles in maintaining crop physiology, nutrient uptake, resilience to abiotic and biotic stress, and defense. Different tillage practices have profound effects on soil and plant-associated microorganisms, thereby affecting soil fertility and crop growth. Our previous data showed that, compared with DT, SS and NT could improve soil aggregate structure and achieve synergistic retention of soil carbon and nitrogen. However, the effects of different tillage modes on the soil microbiome and metabolites in the wheat rhizosphere remain unclear. In this study, 30 differential bacterial genera, 13 differential fungal genera, and 72 differential metabolites were identified among the NT, SS, and DT groups using 16S rRNA gene sequencing, ITS sequencing, and metabolomics, respectively. Our sequencing results showed that the three tillage modes (DT, NT, and SS) had no significant effects on the alpha diversity of soil bacterial and fungal communities in wheat, which is in accordance with previous studies on maize agroecosystems under NT, DT, and rotary tillage and on Mediterranean rainfed systems under minimum tillage (MT) and no-till (NT) practices. Similarly, Li et al. reported that tillage practices had no significant effect on the alpha diversity of the bacterial community. This may be due to the intraspecies stability of bacterial species with respect to soil disturbance. However, conservation tillage (NT and SS) increased fungal richness and diversity compared with the DT treatment in the present study. It is possible that reduced physical disturbance of the soil creates a more suitable microhabitat for fungal communities than for bacterial communities. Tillage methods significantly altered the composition of bacterial and fungal communities at the phylum and genus levels. Most (about 89.7%) keystone species were derived from Proteobacteria, Bacteroidetes, Acidobacteria, and Gemmatimonadetes. At the phylum level, Gemmatimonadetes and Actinobacteria were more abundant in the DT group. Acidobacteria and Ascomycota, which are considered vital decomposers of soil organic matter, were more abundant in the SS group. Firmicutes and Mortierellomycota were more abundant in the NT group. Actinomycetes are enriched in arid soils and in the root environments of different crops, such as peanuts and other angiosperms. In addition, the proportion of Gemmatimonadetes is also higher in arid soils. A decrease in soil water content is accompanied by a significant decrease in the proportion of active Acidobacteria, and Ascomycete members play dominant roles in the decomposition of straw residues in cultivated land and are responsible for the degradation of residues in the soil. Firmicutes belong to eutrophic microorganisms, and Mortierellomycota abundance decreases during the P cycle in conifer forests. Taken together, we speculate that different tillage methods may change soil water content and fertility by affecting the abundances of Actinobacteria, Acidobacteria, Gemmatimonadetes, Firmicutes, Mortierellomycota, and Actinomycetes. These reports, together with our results, suggest that the three tillage methods significantly alter the composition of soil rhizosphere microorganisms in wheat. Core groups in soil can reflect environmental preferences and redistribute to adapt to the ecological environment.
LEfSe indicated that Cryptosporangium, Crossiella, Rhodothermaceae, Bdellovibrio, Leptothrix, Stilbella, Diutina, Candida, Pichia, Cyphellophora, and Pyrenochaetopsis were considerably more abundant under the NT practice. Fungi favoured by no tillage could be root endophytes or species adapted to utilizing intact decaying roots. The essential microbiota for the SS treatment were RB41, Rubrobacter, Latescibacteraceae, Nitrospira, JTB255-marine-benthic-group, Rokubacteriales, and Ctenomyces. We identified Nitrospira, a nitrite-oxidizing bacterium, as also reported previously. In the study of Chen et al., RB41 was detected under nitrogen treatment and was confirmed to play an important role in mediating crop N uptake derived from soil. The crucial genera enriched in the DT group were Nocardia, Aeromicrobium, Chthonomonas, Phenylobacterium, Neorhizobium, Sphingopyxis, Cordyceps, Monocillium, and Subulicystidium. Nocardia species can produce several kinds of biosurfactants, such as lipopeptides and glycolipids, to degrade hydrocarbon compounds. Therefore, soil disturbance caused by tillage would establish new niches and select for different microbial compositions. Metabolites in the rhizosphere are related to the soil microbiota, and their interactions can promote the healthy growth of plants. Plant secondary metabolites are not only a series of beneficial natural products but also an important part of plant defense systems against pathogen attack and environmental stress, helping to build ecological relationships between plants and other organisms. Amino acids and lipids, which are related to nitrogen cycling, were the metabolites most affected by tillage practices. In the present study, functional analysis showed that the soil bacterial and fungal communities were associated with amino acid biosynthesis and carbohydrate biosynthesis. This suggests that amino acids and carbohydrates are the main metabolites influencing the soil bacterial and fungal communities involved in carbon metabolism. A previous study showed that increased levels of certain amino acids (e.g., alanine, arginine, and ornithine) in Sesuvium portulacastrum L. under metal toxicity could be attributed to high levels of stress tolerance. Hou et al. reported that hydrogen-rich water promoted bulb formation in Lilium davidii var. unicolor by regulating sucrose and starch metabolism. Glucose plays an important role in metabolism. Sugar is an important assimilative product of plant photosynthesis, and its anabolism and allocation directly influence plant growth and development as well as crop yield and quality. ABC transporters have essential functions in the transport of biomolecules across membranes and regulate the interactions between the composition of root secretions and the rhizosphere microbiota. Therefore, tillage methods may regulate the characteristics of wheat rhizosphere soil through the pathways enriched by the identified differential metabolites, including biosynthesis of plant secondary metabolites, cysteine and methionine metabolism, galactose metabolism, ABC transporters, beta-alanine metabolism, synthesis and degradation of ketone bodies, and starch and sucrose metabolism, thus influencing the growth and yield of wheat. However, the specific effects of the different metabolites and pathways involved in wheat growth should be further explored.
Collectively, the present study suggests that the soil bacterial communities were highly associated with changes in wheat rhizosphere metabolites and may affect the wheat metabolome directly or indirectly, as reported for the sugarcane rhizosphere by Huang et al. Yamazaki et al. also reported a close relationship between bacterial communities and mineral properties in the soybean rhizosphere. In the study of Li et al., soil bacterial and fungal communities were significantly correlated with soil organic carbon. Therefore, we speculate that the different nutrient stratification caused by tillage resulted in adaptive changes in the bacterial communities and metabolites on wheat roots. Bacteria can produce glucoheptonic acid, gluconic acid, and cellobiose to cross-feed other rhizosphere bacteria. In the present study, glutaric acid was positively correlated with Bdellovibrio and Aeromicrobium but negatively correlated with Aquicella. Phenylobacterium was negatively correlated with 1-hexadecanol, norselegiline, and azelaic acid, whereas Sphingopyxis was negatively correlated with azelaic acid, pantothenic acid, and pyridoxal phosphate. Pantothenic acid and D-fructose were positively correlated with Latescibacteraceae and Rokubacteriales, respectively. Taken together, our study implies that the interaction between the soil microbiome and metabolites may affect soil fertility and wheat growth. In conclusion, this study investigated the changes in microbial communities and metabolites in the wheat rhizosphere under actual field conditions in response to tillage practices. Bacterial communities, fungal communities, and metabolites were distinct among the tillage groups. Proteobacteria and Ascomycota were the predominant taxonomic groups across the tillage soils. Metabolites were more closely related to the rhizosphere bacterial community than to the fungal community. Our findings suggest that tillage-induced variation in the composition of microbial communities and metabolites, and in their interactions in the wheat rhizosphere, may affect soil fertility and wheat growth.
Supplementary Information.
Influence of decompression by laminotomy and percutaneous tansforaminal endoscopic surgery on postoperative wound healing, pain intensity, and lumbar function in elderly patients with lumbar spinal stenosis | 7d16da39-2cde-4306-8d0f-02b20f0bee32 | 11881652 | Surgical Procedures, Operative[mh] | Introduction As individuals age, degenerative changes such as intervertebral disc and facet joint degeneration, hypertrophy of the ligamentum flavum, and spinal instability gradually lead to lumbar spinal sten osis . Degenerative lumbar spinal stenosis (DLSS) was increasingly prevalent in the elderly population, characterized by symptoms of lumbar and leg pain, lower limb numbness, and intermittent claudication, significantly impacting patients’ walking ability, physical function, and quality of life . In cases where conservative treatments fail to alleviate symptoms, surgical intervention becomes necessary . Laminotomy decompression surgery can selectively alleviate pressure within the spinal canal, minimizing damage to the lumbar spine structure . The appropriate window size was selected based on the severity of the patient’s symptoms and the size and location of the lesions . However, due to the limited operative scope, patients with severe spinal canal stenosis may pose increased intraoperative challenges with insufficient decompression . Nonetheless, this approach was associated with greater trauma and slower recovery, which may not be conducive to favorable patient outcomes . With advancements in medical technology, minimally invasive surgery, such as percutaneous transforaminal endoscopic surgery, has gradually penetrated clinical practice . This minimally invasive approach offers benefits such as reduced trauma and favorable outcomes, leading to its increasing adoption in clinical settings . However, comparative reports on the application of laminotomy decompression and percutaneous transforaminal endoscopic treatment in elderly patients with lumbar spinal stenosis were relatively limited . Therefore, this study aims to investigate the impact of laminotomy decompression and percutaneous transforaminal endoscopic treatment on postoperative wound healing, pain intensity, and lumbar function in elderly patients with lumbar spinal stenosis .
Materials and methods
2.1. Study population and grouping criteria
This retrospective study analyzed the clinical data of elderly patients with lumbar spinal stenosis admitted to our hospital from January 2021 to June 2023. The study included 65 patients who underwent laminotomy and 69 patients who underwent percutaneous transforaminal endoscopic spinal decompression surgery.
2.2. Inclusion and exclusion criteria
Inclusion criteria: definitive diagnosis of lumbar spinal stenosis; age of 60 years or older; consistent clinical symptoms, signs, and imaging findings (X-ray, CT, and MRI); ineffective non-surgical treatment for more than 3 months, or recurrent attacks; dynamic lumbar spine X-ray indicating good segmental stability; and no prior lumbar spine surgery. Exclusion criteria: short disease course of spinal stenosis with mild symptoms; concomitant intervertebral instability, fractures, spondylolisthesis, intervertebral infection, tuberculosis, tumors, or deformities; segmental instability of the lumbar spine, such as minor facet joint dislocation; and concomitant severe medical conditions incompatible with surgery.
2.3. Preoperative preparation
Prior to surgery, all patients underwent thorough spinal MRI, X-ray, and CT examinations. Detailed explanations of the surgical principles, procedures, and precautions were provided to both groups of patients. The patients' physical condition and the function of vital organs such as the heart, lungs, liver, and kidneys were assessed to evaluate their tolerance to surgery and intraoperative anesthesia. Hypertensive patients received antihypertensive medications preoperatively to adjust their blood pressure, aiming for an ideal blood pressure of 120-130/70-80 mmHg and not exceeding 150/90 mmHg. For patients with heart disease, routine evaluation of cardiac function and medication management based on cardiology consultation were conducted, and surgery was performed once cardiac function reached Grade 1. Diabetic patients received preoperative insulin therapy to control blood glucose levels. Patients with a history of cerebrovascular disease, coronary heart disease, or deep vein thrombosis underwent preoperative imaging to exclude any new lesions, following the advice of the relevant consulting departments. Anticoagulant drugs were avoided during the perioperative period, and long-term oral clopidogrel users discontinued the medication one week before surgery and switched to subcutaneous injection of low-molecular-weight heparin. Patients with poor lung function received nebulized treatment with drugs such as budesonide and performed respiratory function exercises to optimize their respiratory function. Preoperative anesthesia consultation followed the American Society of Anesthesiologists (ASA) physical status classification to assess patients and ensure their suitability for general anesthesia. The skin in the surgical area was prepared and cleaned one day before surgery, with hair removal over an area of at least 15 cm around the incision, and patients were instructed to refrain from eating and drinking as per standard protocol.
2.4. Surgical procedures
2.4.1. Laminotomy and decompression surgery
Following successful induction of general anesthesia, patients were placed in the prone position, and standard disinfection procedures were performed.
Using fluoroscopy to identify the responsible intervertebral space, a 15 cm midline incision was made, and the skin, subcutaneous fat, and lumbodorsal fascia were dissected sequentially. The sacrospinal muscle was detached to expose the interlaminar space. Partial laminectomy and resection of the hypertrophied ligamentum flavum were performed, along with removal of hypertrophic and internally fused articular processes, to decompress the dura mater and nerve roots. In cases of significant intervertebral disc protrusion, the protruded and degenerated nucleus pulposus tissue was excised. The extent and degree of decompression were determined intraoperatively based on the surgeon's experience.
2.4.2. Percutaneous transforaminal endoscopic spinal canal decompression surgery
The patient is placed in a prone position, and a C-arm X-ray machine is used to localize the target intervertebral space. Routine disinfection and draping are performed, and 0.1% lidocaine local anesthesia is administered. Under fluoroscopic monitoring, a puncture needle is inserted approximately 10-14 cm lateral to the midline on the symptomatic side of the target spinous process, at a 10° angle to the horizontal plane. The needle is advanced through the safe zone of the intervertebral foramen (Kambin's triangle) into the spinal canal. Soft tissue is gradually dilated using grade 1-4 dilation tubes, and a ring saw is used to grind the ventral side of the articular process of the lower vertebra. If necessary, further grinding of the lower part of the articular process and of the upper margin and inner edge of the lower vertebral arch is performed to enlarge the intervertebral foramen and nerve root canal. A working channel is created, the light source and lens are connected, and the screen image is adjusted. The endoscope is then introduced into the spinal canal for radiofrequency hemostasis, excision of proliferative ligamentum flavum tissue around the nerve root, and exploration of the protruding intervertebral disc; under endoscopic visualization, the protruding disc tissue and part of the proliferative ligament tissue are resected to relieve nerve root compression. After confirming the absence of residual compression and observing clear pulsation of the dura mater and nerve roots, the incision is irrigated, radiofrequency ablation decompression is performed, and the annulus fibrosus is shaped. The working channel is removed, the incision is sutured, dressings are applied, and the procedure is concluded.
2.5. Observation indicators
2.5.1. Pain assessment
Preoperative and postoperative pain levels were assessed using the Visual Analog Scale (VAS), which ranges from 0 to 10 points, with higher scores indicating more severe pain.
2.5.2. Lumbar spine function
The Oswestry Disability Index (ODI) questionnaire consists of 10 questions covering pain intensity, self-care, lifting, walking, sitting, standing, sleep disturbance, sex life, social life, and traveling. Each question has 6 options and a maximum score of 5: choosing the first option scores 0, and the last option scores 5. If all 10 questions are answered, the score is calculated as: actual score/50 (maximum possible score) × 100%. If one question is left unanswered, the score is calculated as: actual score/45 (maximum possible score) × 100%. A higher score indicates more severe functional impairment.
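To make the ODI scoring rule concrete, the following is a minimal R sketch of the percentage calculation; the function name and example inputs are illustrative assumptions and are not part of the study protocol.

```r
# Minimal illustrative sketch: ODI percentage from 10 item scores (0-5 each).
# Unanswered items are passed as NA; the denominator shrinks by 5 per missing item,
# matching the rule "actual score / maximum possible score x 100%".
odi_score <- function(item_scores) {
  stopifnot(length(item_scores) == 10,
            all(item_scores >= 0 | is.na(item_scores)),
            all(item_scores <= 5 | is.na(item_scores)))
  answered <- !is.na(item_scores)
  max_possible <- 5 * sum(answered)   # 50 if all answered, 45 if one item is missing
  100 * sum(item_scores[answered]) / max_possible
}

# Example: one unanswered item (NA) gives a denominator of 45
odi_score(c(3, 2, 4, 1, 0, 5, 2, NA, 3, 1))
```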
The Japanese Orthopaedic Association Back Pain Evaluation Questionnaire (JOA) encompasses subjective symptoms, activities of daily living, clinical signs, and bladder function, with maximum subscores of 9, 14, 6, and −6 to 0, respectively, and a maximum total score of 29. A higher score implies better functional status.

2.6. Data collection

Demographic data, preoperative clinical characteristics, surgical details, postoperative complications, wound healing status, and pain intensity measurements (Visual Analog Scale) were collected, and lumbar function was assessed using the standardized Japanese Orthopaedic Association (JOA) and Oswestry Disability Index (ODI) scoring systems. Wound healing was judged by the treating physician: the absence of symptoms such as redness, swelling, exudation, and pain at the wound indicated that the superficial and underlying tissues had essentially healed.

2.7. Statistical analysis

Data were analyzed using SPSS 25.0 statistical software (SPSS Inc., Chicago, IL, USA). Categorical data were expressed as counts (n). The chi-square test was applied with the basic formula when the sample size was ≥40 and the theoretical (expected) frequency T was ≥5, with the test statistic denoted χ2; when the sample size was ≥40 but 1 ≤ T < 5, the continuity-corrected chi-square formula was used. Normally distributed continuous data were expressed as mean ± standard deviation (x̄ ± s). Non-normally distributed data were analyzed using the Wilcoxon rank-sum test. p < 0.05 was considered statistically significant.
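As an illustration of the test-selection rule stated above (not the authors' actual analysis code), the following Python sketch uses SciPy to apply the uncorrected chi-square formula when the sample size is at least 40 and all expected frequencies are at least 5, and the continuity-corrected formula when the smallest expected frequency lies between 1 and 5. The function name and example counts are ours.

```python
import numpy as np
from scipy.stats import chi2_contingency

def chi_square_by_rule(table):
    """Chi-square test for a 2x2 table of counts, following the rule above.

    Returns (chi2 statistic, p value, whether the continuity correction was used).
    """
    table = np.asarray(table, dtype=float)
    n = table.sum()
    # Expected (theoretical) frequencies under independence
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    t_min = expected.min()
    if n >= 40 and t_min >= 5:
        chi2, p, _, _ = chi2_contingency(table, correction=False)
        return chi2, p, False
    if n >= 40 and 1 <= t_min < 5:
        chi2, p, _, _ = chi2_contingency(table, correction=True)  # Yates correction
        return chi2, p, True
    raise ValueError("Conditions for the chi-square test are not met; consider Fisher's exact test.")

# Example: urinary tract infection, 4/65 in the laminotomy group vs. 0/69 in the endoscopic group.
# The smallest expected frequency is about 1.9, so the corrected formula is applied,
# giving p of roughly 0.113, consistent with the value reported in the Discussion.
print(chi_square_by_rule([[4, 61], [0, 69]]))
```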
Results

3.1. General information

There were no significant differences between the laminotomy group (n = 65) and the percutaneous transforaminal endoscopic group (n = 69) with respect to age, gender distribution, body mass index (BMI), smoking status, alcohol consumption, diabetes prevalence, hypertension prevalence, hyperlipidemia prevalence, history of lumbar injury, physical labor intensity, or disease course (p > 0.05). There were also no significant differences between the two groups in the distribution of underlying conditions, including pure lumbar spinal stenosis (21.54% vs. 21.74%), lumbar disc herniation (49.23% vs. 50.72%), degenerative spondylolisthesis (26.15% vs. 23.19%), and degenerative scoliosis (3.08% vs. 4.35%). Similarly, there were no significant differences in the distribution of ASA classification grades between the two groups, with Grade I (12.31% vs. 14.49%), Grade II (76.92% vs. 71.01%), and Grade III (10.77% vs. 14.49%). No significant differences were observed in the distribution of stenosis at the L2-3 (4.62% vs. 7.25%), L3-4 (23.08% vs. 20.29%), and L4-5 (44.62% vs. 47.83%) levels, and the distribution at the L5-S1 level (27.69% vs. 24.64%) likewise did not differ significantly (t = 0.042, p = 0.837). The comparison of surgical parameters between the laminotomy group and the percutaneous transforaminal endoscopic group revealed significant differences in surgical time and intraoperative blood loss. Specifically, percutaneous transforaminal endoscopic surgery demonstrated significantly shorter surgical time than laminotomy (70.78 ± 6.80 min vs. 128.97 ± 4.70 min, t = 4485, p < 0.001), along with substantially less intraoperative blood loss (94.22 ± 7.69 mL vs. 327.68 ± 6.44 mL, t = 190.871, p < 0.001). However, no significant difference was found in the length of hospital stay between the two groups (14.26 ± 5.45 days vs. 13.49 ± 2.49 days, t = 1.060, p = 0.292). Overall, baseline demographic and clinical characteristics were comparable between the two groups, which strengthens the validity of the subsequent comparisons of the primary outcome measures.

3.2. Postoperative complications and evaluation scores

The comparison of complications between the laminotomy group and the percutaneous transforaminal endoscopic group did not reveal any statistically significant differences in the incidence of dural tear (1.54% vs. 1.45%), urinary tract infection (6.15% vs. 0%), urinary retention (6.15% vs. 0%), pneumonia (3.08% vs. 2.90%), or postoperative anemia (1.54% vs. 1.45%). The comparison of postoperative wound healing between the two groups revealed significant differences in ambulation time and wound healing time. Percutaneous transforaminal endoscopic surgery demonstrated a markedly shorter ambulation time than laminotomy (3.00 ± 0.00 days vs. 5.06 ± 0.30 days, χ2 = 134, p < 0.001). Similarly, wound healing time was significantly shorter in the percutaneous transforaminal endoscopic group than in the laminotomy group (9.93 ± 1.19 days vs. 12.23 ± 1.74 days, W = 3873, p < 0.001).
In the comparison of pain intensity before and after surgery between the two groups, no statistically significant difference was observed in preoperative pain levels (VAS) (7.29 ± 1.17 vs. 7.46 ± 1.12, p = 0.758). However, postoperative pain levels (VAS) were significantly lower in the percutaneous transforaminal endoscopic surgery group than in the laminotomy group (3.48 ± 1.11 vs. 2.80 ± 1.05, p = 0.007). In comparing the JOA scores between the laminotomy group and the percutaneous transforaminal endoscopic surgery group, no statistically significant differences were found in the pre-treatment scores for lower back pain, leg pain, walking ability, or total JOA score. However, post-treatment JOA scores showed significantly greater improvement in the percutaneous transforaminal endoscopic surgery group than in the laminotomy group for lower back pain (2.09 ± 0.28 vs. 1.92 ± 0.48, χ2 = 11.477, p = 0.003), leg pain (2.00 ± 0.00 vs. 1.86 ± 0.66, χ2 = 34.244, p < 0.001), walking ability (2.33 ± 0.47 vs. 1.88 ± 0.74, χ2 = 25.194, p < 0.001), and total JOA score (22.87 ± 2.43 vs. 21.82 ± 3.13, W = 2.170, p = 0.032). In comparing the ODI scores between the two groups, no statistically significant difference was found in the pre-treatment scores (70.85 ± 0.48 vs. 70.70 ± 0.52, t = 2558.5, p = 0.081). However, post-treatment ODI scores demonstrated significant improvement in the percutaneous transforaminal endoscopic surgery group compared with the laminotomy group (37.94 ± 0.50 vs. 35.06 ± 0.29, t = 4485, p < 0.001).

3.3. Correlation analysis

Correlation analysis showed significant negative correlations of surgical time, intraoperative blood loss, and ambulation time with postoperative wound healing, pain intensity, and lumbar spine function in elderly patients with lumbar spinal stenosis (r = −0.981, r² = 0.962, p < 0.001; r = −0.998, r² = 0.996, p < 0.001; and r = −0.979, r² = 0.959, p < 0.001, respectively). However, the correlations between wound healing time and postoperative pain, lower back pain, leg pain, walking ability, total Japanese Orthopaedic Association (JOA) score, and Oswestry Disability Index (ODI) score after treatment did not reach statistical significance (p > 0.05). The corresponding correlation coefficients were 0.251, 0.197, 0.297, 0.304, 0.186, and −0.97, respectively, with r² values of 0.063, 0.039, 0.088, 0.092, 0.035, and 0.941. These findings suggest important associations between surgical variables and postoperative outcomes in elderly patients with lumbar spinal stenosis.
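To make explicit how the r and r² values reported above are related, the short Python sketch below computes a Pearson correlation coefficient, its p value, and the corresponding coefficient of determination (r² = r × r) with SciPy. The arrays are placeholder values chosen only for illustration; they are not study data.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder values standing in for a surgical variable and a postoperative outcome score
surgical_time = np.array([70, 75, 80, 120, 125, 130, 135, 72, 128, 78], dtype=float)
outcome_score = np.array([25, 24, 23, 15, 14, 13, 12, 24, 14, 23], dtype=float)

r, p_value = pearsonr(surgical_time, outcome_score)
r_squared = r ** 2  # share of variance in the outcome associated with the surgical variable

print(f"r = {r:.3f}, r^2 = {r_squared:.3f}, p = {p_value:.4f}")
```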
Discussion

Lumbar spinal stenosis is a prevalent condition in the elderly population, often leading to significant impairment in physical function and quality of life. Surgical intervention becomes necessary when conservative treatments fail to alleviate symptoms. This study investigated the impact of laminotomy decompression and percutaneous transforaminal endoscopic treatment on postoperative wound healing, pain intensity, and lumbar function in elderly patients with lumbar spinal stenosis. Surgical parameters played a significant role in differentiating the two surgical approaches. Percutaneous transforaminal endoscopic surgery demonstrated significantly shorter surgical time and reduced intraoperative blood loss compared with laminotomy. These findings align with the advantages typically associated with minimally invasive surgical techniques: the minimally invasive nature of percutaneous transforaminal endoscopic surgery allows for smaller incisions and reduced tissue disruption, leading to shorter surgical time and less intraoperative blood loss. This approach relies on advanced endoscopic techniques and specialized instruments to access the spinal canal and perform decompression with minimal disruption to surrounding tissues. There were no significant differences in the incidence of dural tear, pneumonia, or postoperative anemia between the two surgical methods. The laminotomy group showed a higher incidence of urinary tract infection and urinary retention than the percutaneous transforaminal endoscopic group, but the difference was not statistically significant. These findings suggest that percutaneous endoscopic lumbar surgery is relatively safe, highlighting its feasibility in treating elderly patients with lumbar spinal stenosis. Because the surgeons and nursing staff were highly experienced in both laminotomy decompression and percutaneous transforaminal endoscopic surgery, and postoperative care was meticulous, the overall incidence of postoperative complications was low; although there was some difference in the number of complication cases between the two groups, it was not statistically significant. In recent years, many studies have addressed the treatment of lumbar spinal stenosis in the elderly, and their results are consistent with ours. In this study, urinary tract infection and urinary retention each occurred in 4 cases (6.15%) in the laminotomy group (n = 65), whereas neither was observed in the percutaneous transforaminal endoscopic group (n = 69). Although percutaneous transforaminal endoscopic surgery therefore appeared advantageous in this respect, the p value for both comparisons was 0.113; this discrepancy between the apparent advantage and the non-significant result may reflect a limitation of our study, most likely the small number of events. Despite this limitation, our findings are similar to those of other recent studies, many of which have shown that patients who develop urinary tract infection or urinary retention after lumbar surgery are generally in poor health or have undergone multiple lumbar operations. Patil A et al. performed laminectomy in patients with achondroplasia and hypochondroplasia and compared the different types of adverse events at 90 days of follow-up after surgery.
Patients with achondroplasia had significantly higher rates of blood transfusion (OR = 6.40, p < 0.001), urinary tract infection (OR = 3.79, p < 0.001), wound rupture (OR = 3.71, p < 0.001), and hematoma (OR = 2.94, p = 0.032). Mormol JD et al. studied risk factors for urinary retention after posterior lumbar fusion in a retrospective cohort study of the preoperative, intraoperative, and postoperative characteristics of 814 patients who underwent lumbar laminectomy and fusion; the results showed that postoperative urinary tract infection (OR = 5.60, p = 0.005) was associated with postoperative urinary retention, whereas a history of previous lumbar surgery (OR = 0.55; p = 0.019) was associated with decreased urinary retention after surgery. Postoperative wound healing and pain intensity were notable areas of divergence between the two surgical approaches. Percutaneous transforaminal endoscopic surgery demonstrated significantly shorter ambulation time and wound healing time compared with laminotomy. Additionally, postoperative pain levels were significantly lower in the percutaneous transforaminal endoscopic group, highlighting the benefits of this minimally invasive approach in promoting faster recovery and improved pain control in elderly patients with lumbar spinal stenosis. Percutaneous transforaminal endoscopic surgery involves targeted visualization and precise manipulation of anatomical structures, allowing for more focused and efficient decompression of neural elements. By specifically targeting the affected area through a transforaminal approach, the procedure may minimize the need for extensive tissue retraction and bony removal, contributing to faster wound healing and reduced postoperative pain. The functional outcomes, as assessed by JOA and ODI scores, revealed interesting trends favoring percutaneous transforaminal endoscopic surgery. Post-treatment JOA scores showed significantly greater improvement in the percutaneous transforaminal endoscopic group for lower back pain, leg pain, and walking ability. Similarly, post-treatment ODI scores demonstrated significant improvement in the percutaneous transforaminal endoscopic group compared with the laminotomy group. These findings suggest the potential effectiveness of percutaneous transforaminal endoscopic surgery in improving postoperative pain intensity and lumbar function in elderly patients with lumbar spinal stenosis, supporting its consideration as a favorable treatment modality in this patient population. The reduced tissue trauma associated with percutaneous transforaminal endoscopic surgery may lead to less postoperative inflammation and scarring, which are crucial factors in promoting faster wound healing and overall recovery. Furthermore, the minimally invasive approach may result in better preservation of the surrounding musculature, potentially leading to enhanced postoperative lumbar function and reduced postoperative pain. The correlation analysis in our study revealed important associations between surgical variables and postoperative outcomes in elderly patients with lumbar spinal stenosis. Specifically, surgical time, intraoperative blood loss, and ambulation time showed significant negative correlations with postoperative wound healing, pain intensity, and lumbar spine function.
These findings emphasize the importance of surgical variables in influencing postoperative outcomes and highlight the potential benefits of minimizing surgical time and intraoperative blood loss in achieving favorable postoperative results in this patient population. The results of this study contribute valuable data to the existing literature by providing comparative insights into the outcomes of laminotomy decompression and percutaneous transforaminal endoscopic treatment in elderly patients with lumbar spinal stenosis. These findings support the consideration of percutaneous transforaminal endoscopic surgery as a promising alternative to traditional laminotomy decompression in this patient population. Nevertheless, it is important to acknowledge certain limitations of this study. Firstly, the retrospective nature of the study may have introduced inherent biases and limitations in data collection. In addition, group allocation was not randomized; it was based on the surgeon's recommendation and the patient's financial circumstances, which introduces a risk of selection bias. Additionally, the relatively limited sample size may affect the generalizability of the findings. Future prospective studies with larger sample sizes and longer follow-up periods are warranted to further validate these results and provide more robust evidence regarding the comparative effectiveness of the two surgical approaches in elderly patients with lumbar spinal stenosis.
Conclusion

This study provides valuable insights into the influence of laminotomy decompression and percutaneous transforaminal endoscopic treatment on postoperative outcomes in elderly patients with lumbar spinal stenosis. The findings suggest that percutaneous transforaminal endoscopic surgery may offer advantages in terms of shorter surgical time, reduced intraoperative blood loss, faster wound healing, improved pain control, and enhanced lumbar function compared with traditional laminotomy decompression. These results warrant further consideration and prospective investigation to guide clinical decision-making and improve outcomes for elderly patients with lumbar spinal stenosis.
Loss of clear cell characteristics in aggressive clear cell odontogenic carcinoma: a case report

Clear cell odontogenic carcinoma (CCOC) is a rare type of odontogenic carcinoma, characterized by sheets and islands of vacuolated and clear cells. According to the 2005 and 2017 WHO Classification of Head and Neck Tumors, CCOC can be classified into three subtypes: monophasic, biphasic, and ameloblastoma-like. In the updated WHO Classification of Head and Neck Tumors in 2022, the description of the three histological subtypes of CCOC was removed. In aggressive CCOC, necrosis, conspicuous cytological malignancy, and perineural infiltration can be observed. Despite the generally indolent behavior of some CCOC cases, approximately 20% of reported cases have been found to metastasize and 42% have recurred. Molecular studies have indicated that approximately 80% of CCOC cases harbor EWSR1 rearrangements. This case describes an aggressive CCOC with EWSR1::ATF1 gene fusion that lost its clear cell characteristics and underwent prominent squamous differentiation after repeated recurrence.

Medical and histopathology history in 2012

In 2012, a 56-year-old male reported a sensation of looseness in his lower anterior teeth, ultimately resulting in the sudden loss of one of the lower anterior teeth. A cone-beam computed tomography (CBCT) examination revealed a bone defect spanning from teeth 38 to 44, a localized depression near the upper border of the mandible, and rough edges in the affected area (Fig. ). Subsequently, the patient underwent a wide resection of the mandible from teeth 38 to 44, followed by reconstruction of the defect using a peroneal myocutaneous flap. Histological examination revealed that the tumor consisted of epithelial nests of varying sizes arranged in a biphasic pattern. These nests comprised predominantly clear cells along with peripheral dark, unvacuolated basaloid cells (Fig. A, B). At the lesion's periphery, a small subset of tumor cells displayed characteristics of epidermoid cells, with eosinophilic cytoplasm; each nest of epidermoid cells typically contained only a few dozen cells (Fig. C). Pathologic mitoses and necrosis were observed within the epithelial nests of the tumor (Fig. D). The tumor cells were observed to be invading the bone tissue (Fig. E). The tumor cells were positive for AE1/AE3, KRT19, Pan-CK, EMA, P40, and P63, and negative for KRT7, S-100, and P53 (Figure A-I). The Ki-67 proliferation index averaged 5% (Figure J).

Medical and histopathology history in 2015

In 2015, a firm mass was palpable in the left submandibular area of the patient, measuring 1.5 × 1.5 × 1.2 cm and exhibiting close adherence to the mandible. No palpable lymph nodes were evident in the neck during this period. A CBCT scan showed an enlarged mass in the submental region (Fig. A). Furthermore, an ill-defined radiolucent lesion was observed in the right posterior alveolar bone of the mandible (Fig. B-C). The patient underwent surgical resection of the mass, extraction of tooth 45, and resection of the bone surrounding tooth 45 under general anesthesia. Histologically, the tumor primarily presented as a monophasic variant, with nuclei of different sizes (Fig. D-F). At the invasive edge of the tumor, tumor cells exhibited an epidermoid morphology (Fig. G).
The immunohistochemical marker findings were generally in line with those observed in the 2012 sample (Figure ). Compared with the 2012 specimen, KRT7 and P53 exhibited weak and focal positivity (Figure C and I). The Ki-67 proliferation index displayed a notable rise, averaging 15% (Figure J), suggesting heightened proliferative activity among the tumor cells. FISH analysis showed a rearrangement of the EWSR1 gene (97%, Fig. A) and a gene fusion of EWSR1::ATF1 (80%, Fig. B). The mass was diagnosed as recurrent CCOC. Following the surgery, the patient did not undergo radiotherapy or chemotherapy.

Medical history in 2018

In 2018, the tumor recurred, alongside bilateral lung metastases (Fig. ). Before the surgical intervention, the patient underwent targeted therapy and chemotherapy. The treatment regimen consisted of apatinib mesylate at a dosage of 250 mg once daily and tegafur/gimeracil/oteracil potassium capsules at 60 mg twice daily, administered orally. Nevertheless, owing to the occurrence of headaches, the tegafur/gimeracil/oteracil potassium capsules were discontinued after one week, with only apatinib mesylate continued for a month. Subsequently, the patient underwent an extended mandibulectomy, defect repair with transfer of a fibular myocutaneous flap, and submandibular lymph node dissection at another medical facility, where metastasis of the tumor to the submandibular lymph nodes was noted. After the surgical intervention, the patient did not proceed with radiotherapy or chemotherapy.

Medical and histopathology history in 2020

In 2020, at the age of 64, the patient presented with a 3.5 × 3 × 1.5 cm mass in the right submandibular region. A CBCT scan conducted at another medical facility revealed a lesion spanning from teeth 38 to 46, characterized by partial depression along the upper border of the mandible near the chin and delineated by a rugged contour. Subsequently, the patient underwent a right neck dissection, excision of the tumor located in the parapharyngeal space via an external cervical approach, and resection with transplantation of a pedicled fascial flap. Histologically, the recurrent tumor was composed of variably sized epithelial nests of epidermoid cells, with dense collagen fibers forming fibrous septa (Fig. A-B). Tumor cells were mildly atypical, with eosinophilic cytoplasm, vesicular chromatin, and prominent nucleoli (Fig. C-D). At higher magnification, abnormal mitoses were occasionally observed, and the nuclear-to-cytoplasmic ratio of the tumor cells was increased (Fig. E-F). The tumor exhibited aggressive features, including necrosis, destruction of muscle and adipose tissue, perineural spread, and vascular invasion (Fig. A-D). The tumor cells were positive for AE1/AE3, KRT19, Pan-CK, EMA, P40, and P63 (Fig. A-F) and negative for S-100 (Fig. G), consistent with the findings in the 2012 and 2015 samples. The Ki-67 proliferation index averaged 15% (Fig. H). Moreover, because the tumor cells showed prominent squamous differentiation and some regions mimicked neuroendocrine differentiation, non-keratinizing squamous cell carcinoma and neuroendocrine carcinoma had to be ruled out. The tumor cells were positive for CK34βE12 and negative for CK35βH11 and KRT7 (Fig. A-C). Neuroendocrine markers, including CD56, CgA, and Syn, were negative in tumor cells (Fig. D-F). The weak cytoplasmic expression of CD99 ruled out the possibility of Ewing's sarcoma (Fig. G), which also harbors EWSR1 gene rearrangement.
FISH was performed to assess the rearrangement of EWSR1. Of the 100 tumor nuclei counted for the EWSR1 break-apart probe, 96 nuclei were found to exhibit positive signals (Fig. H). The main differential diagnosis and immunohistochemistry panel of this case were listed in Table . The immunostaining of P53 exhibited a progressive augmentation in tandem with the advancement of the tumor. In the year 2012, the tumor displayed a lack of P53 staining (Figure I), while manifesting focal positivity in 2015 (Figure I). Notably, by 2018, the tumor cells demonstrated a widespread positivity for P53 (Fig. A). In order to ascertain if the aggressive progression was attributed to TP53 mutations, we conducted Sanger sequencing for exons 5, 7, and 8, which encompass high-frequency mutation loci within the TP53 gene. The results revealed the absence of common TP53 mutations in any of the tumor samples from 2012, 2015, and 2020 (Fig. B). Unfortunately, the patient passed away due to an accident without receiving any further treatment following the surgical procedure.
Discussion

Clear cell odontogenic carcinoma is a rare odontogenic tumor that was first described by Hansen et al. in 1985. According to the 2005 and 2017 WHO Head and Neck Tumor Classifications, CCOC is categorized histologically into three subtypes: the biphasic, ameloblastoma-like, and monophasic variants. In the biphasic variant, tumor nests are primarily composed of clear cells, with a few basal-like cells visible in the peripheral layer of the epithelial nests; the cytoplasm of the basal-like cells is weakly eosinophilic. In the ameloblastoma-like type, the structure of the tumor nests resembles that of an ameloblastoma, with the nuclei of the peripheral cells exhibiting inverted polarity and forming a palisade. The monophasic type consists entirely of clear cells. To date, approximately 131 cases of CCOC have been documented in the literature. The histological subcategories of CCOC have been described in 85 publications encompassing a total of 116 cases: the biphasic variant predominated with 97 occurrences, whereas the ameloblastoma-like and monophasic variants were less common, with only 14 and 5 reported cases, respectively. However, in the 2022 WHO Head and Neck Tumor Classification, the histological categories of CCOC were eliminated, indicating that histological subtype may not correlate with the biological behavior and prognosis of the tumor. Other atypical pathological features, including cystic degeneration, keratin pearl formation, and dentinoid deposition, have also been observed in a few cases. In a review of the English literature, CCOC exhibited an invasive growth pattern, with a recurrence rate estimated at 42%. It is noteworthy that only a single published study has detailed the histopathology of recurrent CCOC. Omar Breik et al. noted that in recurrent tumors, clear cells were supplanted by clusters, cords, trabeculae, and sheets of neoplastic epithelial cells. An intriguing observation in the present case is the transition from the initial clear cell phenotype to a CCOC with prominent squamous differentiation following multiple recurrences. In clear cell tumors, the transparency of the cytoplasm, which does not stain with hematoxylin and eosin, has traditionally been attributed to the accumulation of glycogen and deposition of fat. During aggressive progression of the tumor, metabolic alterations can increase consumption of the glycogen stored within the cytoplasm of tumor cells; this phenomenon may partially explain the loss of clear cell characteristics in the tumor cells. Research published in Nature in 2013 shed light on the genetic changes associated with clear cell renal cell carcinoma.
It also presented evidence of metabolic shifts in aggressive and recurrent clear cell renal cell carcinoma, such as down-regulation of genes involved in the tricarboxylic acid (TCA) cycle, up-regulation of glutamine transporter genes, and increased levels of acetyl-CoA carboxylase protein. These metabolic alterations are crucial in understanding the progression and behavior of this type of cancer. Although the mechanisms remain unclear, the disappearance of the transparent phenotype of CCOC might relate to increased glycogen consumption and aggressive progression. The diagnostic considerations in this case encompassed neoplasms consisting of epidermoid cells, including poorly differentiated mucoepidermoid carcinoma, nonkeratinizing squamous cell carcinoma, and neuroendocrine carcinoma. Histologically, poorly differentiated mucoepidermoid carcinoma exhibits a solid growth pattern with fewer mucous cells and an increased abundance of epidermoid cells, demonstrating heightened cytologic atypia, necrosis, and perineural invasion. Mucoepidermoid carcinoma typically displays robust positivity for KRT7 and negativity for KRT19; additionally, the majority of mucoepidermoid carcinomas harbor rearrangements of the MAML2 gene. Nonkeratinizing squamous cell carcinoma is characterized by its relative immaturity, minimal to no keratinization, nuclear atypia, numerous mitotic figures, and peripheral palisading of tumor nuclei; the tumor comprises interconnecting squamous sheets that invade the stroma with a broad, pushing border. Immunohistochemically, nonkeratinizing squamous cell carcinoma is positive for high-molecular-weight cytokeratin, p63, and p40. Neuroendocrine carcinoma is composed of cells with hyperchromatic nuclei, indistinct nucleoli, and scant cytoplasm, and the presence of numerous mitoses and apoptotic cells is notable; at least one neuroendocrine marker, such as Syn, CgA, or CD56, is typically immunopositive in neuroendocrine carcinomas. The rearrangement of the EWSR1 gene was crucial evidence for diagnosing CCOC in this case. EWSR1 gene rearrangements can be detected in various benign and malignant lesions, including soft tissue and bone entities. In this particular case, ATF1 was identified as the partner gene involved in the EWSR1 rearrangement. ATF1, CREB1, and CREM are members of the CREB (cAMP response element-binding protein) family and are among the most common partner genes found in EWSR1 rearrangements. This genetic rearrangement plays a significant role in the pathogenesis and diagnosis of certain tumors, providing important molecular information for accurate classification and management. So far, in all documented instances of CCOC harboring EWSR1 gene rearrangement, the partner genes identified have been ATF1/CREB1/CREM. In addition to CCOC, the EWSR1-ATF1 gene fusion has been observed in clear cell carcinoma (CCC) of the salivary gland, hyalinizing clear cell carcinoma (HCCC), angiomatoid fibrous histiocytoma, malignant mesothelioma, and atypical central neurocytoma. On the basis of these shared molecular alterations, Xuan et al. suggested that it is reasonable to regard HCCC as a subtype of CCC. CCOC, CCC, and HCCC are all characterized by the presence of clear cells, leading to arguments that they are essentially analogous tumors manifesting at distinct anatomical sites. CCOC typically arises in the jaw, while HCCC emerges in the submucosa.
Differential diagnosis between these entities therefore relies on supportive evidence from pathological features and tumor localization. In this report, we described a novel recurrent CCOC with high-grade transformation and disappearance of the transparent phenotype of the tumor cells. Because this case represents a rare phenotype of CCOC, more cases and a longer follow-up period are necessary to further elucidate its biologic behavior, prognosis, and genetic profile.

Below is the link to the electronic supplementary material. Supplementary Material 1: Fig. 1 IHC staining of the tumor in 2012. (A) AE1/AE3, (B) KRT19, (C) KRT7, (D) Pan-CK, (E) EMA, (F) P40, (G) P63, (H) S-100, (I) P53, and (J) Ki-67 (IHC, ×200). Supplementary Material 2: Fig. 2 IHC staining of the recurrent tumor in 2015. (A) AE1/AE3, (B) KRT19, (C) KRT7, (D) Pan-CK, (E) EMA, (F) P40, (G) P63, (H) S-100, (I) P53, and (J) Ki-67 (IHC, ×200).
Reducing Healing Period with DDM/rhBMP-2 Grafting for Early Loading in Dental Implant Surgery

Autogenous bone is currently regarded as the optimal graft for bone induction because it possesses three critical properties required for bone formation: osteogenesis (the ability to form new bone), osteoconduction (the capacity to support the growth of new bone along its surface), and osteoinduction (the ability to induce bone formation) . However, the use of autogenous bone grafts presents several limitations . The graft must be harvested from the patient, which limits the available quantity of bone. Additionally, harvesting bone creates a secondary surgical site, leading to increased surgical time and blood loss. Morbidity at the donor site is a common and persistent issue. As a result, significant research in tissue engineering has been devoted to finding alternatives to autogenous bone grafts. Many of the bone graft substitutes undergo processing that denatures or removes proteins within the graft, leading to diminished or nonexistent osteoinductive properties. While xenografts are widely used in clinical practice and can achieve sufficient bone formation in favorable environments, they have the drawbacks of being difficult to monitor for bone formation at the terminal sites and being prone to infection. Dentin shares chemical components with bone, including biological apatite (70%), collagen (18%), non-collagenous proteins (2%), and body fluids by weight. The dentin matrix is characterized by nano-sized dentinal tubules, typically ranging from 1 to 3 µm in diameter. These tubules play a key role in the release of intrinsic growth factors embedded within the matrix, as well as proteins that bind to hydroxyapatite. The density of dentinal tubules is approximately 18,000 to 21,000 per square millimeter . The average porosity of these tubules is about 3.5%, which is notably lower than the 6.2% porosity found in natural human bones . The process of demineralizing the dentin matrix involves the extraction of inorganic salts while minimizing the leaching or denaturation of its organic components. As a result, a demineralized dentin matrix (DDM) emerges, characterized as a cell-free matrix composed of acid-insoluble, highly cross-linked type I collagen that contains matrix-binding proteins, including bone morphogenetic proteins (BMPs), within its microporous dentinal tubules. These osteoinductive elements and growth factors within DDM represent approximately 5% of the natural spectrum of growth factors, such as transforming growth factors, insulin-like growth factors, and BMPs. Additionally, BMPs derived from the tooth matrix exhibit similar biological activity as those sourced from bone tissue . The porosity of dentinal tubules increases from 3%–6% to an average of 20%, while the uncollapsed freeze-dried interfibrillar space reaches approximately 50% . The porous structure and collagen-rich matrix of DDM make it suitable as a carrier for recombinant human BMP-2 (rhBMP-2) . Studies have suggested synergy between endogenous BMPs and externally applied rhBMP-2. DDM combined with rhBMP-2 (AutoBT.BMP, Korea Tooth Bank, Seoul, Republic of Korea) results in more effective bone formation and osteocyte embedding compared to DDM alone . Clinically, DDM has been recognized as a promising functional bone graft material in implant dentistry as well as an rhBMP-2 carrier .
The goal of dental implant surgery is to achieve osseointegration, where bone forms directly on the implant surface. Traditionally, a healing period of 3 months for the mandible and 5–6 months for the maxilla has been recommended to ensure successful osseointegration. In a two-stage procedure, implants are typically placed 4–9 months after autograft bone transplantation, with a longer healing period (≥ nine months) suggested for larger defects. This approach has allowed implants to be placed in relatively stable conditions . Early loading of implants has been attempted when sufficient primary stability is achieved without the need for bone grafting . BMPs, which are critical members of the highly conserved signaling proteins of the transforming growth factor-beta (TGF-β) superfamily, play a significant role in bone regeneration . Several animal studies have shown that rhBMP-2 induces faster and greater initial bone formation and significantly increases the bone-to-implant contact ratio when used in conjunction with implant placement . Currently, rhBMP-2 is clinically utilized with carriers such as absorbable collagen sponge (ACS), biphasic calcium phosphate, β-tricalcium phosphate, hydroxyapatite, demineralized bone matrix, and platelet-rich fibrin. Since rhBMP-2 needs to be released steadily at low concentrations within the body, a robust carrier that can bind and retain rhBMP-2 is required . Among these carriers, ACS demonstrates a high binding capacity for rhBMP-2 and is widely used in oral and maxillofacial applications . However, due to the physical instability of ACS, compression from surrounding soft tissues and fluids in the body can lead to the localized release of high doses of rhBMP-2, which may result in ectopic bone formation, swelling, erythema, and even a potential risk of tumor formation . Since DDM consists mainly of a type I collagen matrix, it has been suggested as a stable carrier for rhBMP-2, which could allow for the sequential and slow release of rhBMP-2 over a month . The consistent release of rhBMP-2 during the first month may promote increased bone formation and potentially accelerate the timeline for bone healing, even at a reduced concentration of 0.2 mg/mL compared to the FDA-approved 1.5 mg/mL . It is hypothesized that DDM combined with rhBMP-2 could enhance bone healing more effectively than autogenous bone alone. The authors propose that even when bone grafting is performed simultaneously with implant placement, DDM combined with rhBMP-2 may allow for earlier loading, potentially in less than three months. This study aimed to demonstrate clinical outcomes suggesting that grafting DDM incorporated with rhBMP-2 at the time of implant placement could reduce the healing period for early loading. All procedures in this study adhered to the ethical guidelines set by the institutional and national committees responsible for human experimentation, in accordance with the Helsinki Declaration of 1975, revised in 2008. Informed consent was obtained from all participants involved in the study. The study was approved by the Jeonbuk National University Hospital Institutional Review Board (IRB No. 2022-09-063).
Through review of medical records from January 2020 to September 2022, patients were included according to the following inclusion criteria: (1) age over 20 years; (2) no history of uncontrolled systemic diseases or syndromes; (3) implant placement in the posterior mandible or maxilla within six weeks after extraction; (4) need for bone grafting due to implant fixture exposure (more than four threads; ≥ 3.2 mm); (5) a two-stage implant placement requiring simultaneous bone grafting using only autogenous demineralized dentin matrix incorporated with rhBMP-2; (6) a second surgery performed within 4 months of implant placement; (7) the presence of natural teeth, implants, or fixed prostheses in the opposing dentition; and (8) availability of cone-beam computed tomography (CBCT) imaging. Patients were excluded if they had undergone bone grafts mixed with other graft materials or membranes, or if there were missing records of follow-up or CBCT imaging. Clinical outcomes, such as primary surgery stability (implant stability quotient; ISQ), healing period, secondary surgery stability (ISQ), and bone width measured on CBCT, were analyzed separately for implants placed in the maxilla and mandible. For the maxilla, the analysis was further divided into ridge augmentation and sinus grafting via lateral approach, where bone grafting was performed only at the fixture apex. Since the sinus graft group did not receive bone grafting around the fixture neck area, it was considered a control group. Surgical procedure Teeth that were deemed unsalvageable were extracted and sent to a manufacturer (Korea Tooth Bank, Seoul, Republic of Korea). The procurement, storage, processing, and packaging of teeth were individually handled in accordance with the Good Practice Guidelines for Tooth Handling Institutions, as stipulated by the Korea Administration of Health and Welfare . The processing method for the demineralized dentin matrix (DDM) includes refrigerating the teeth in 70% ethyl alcohol, followed by rinsing and removing any attached soft tissue and pulp using a retrograde technique. The dentin is then ground into particles (300–800 μm), followed by defatting and demineralization with 0.6 N HCl, incorporating a viral inactivation procedure as described in Patent EP 2601982, resulting in a volume ranging from 0.4 to 1.0 cc . Subsequently, rhBMP-2 is applied to the DDM powder at a concentration of 0.2 mg/mL (Cowellmedi, Seoul, Republic of Korea). Implant placement and bone grafting at the extraction site were performed within 4–6 weeks after extraction. If other teeth required extraction at sites other than the implant surgery site, AutoBT.BMP was prepared in advance, and surgery was performed immediately after extraction. The implant was placed, and a cover screw was connected when a minimum initial fixation force of 30 Ncm was achieved . AutoBT.BMP was grafted around the exposed implant threads, and primary closure was achieved by using 4–0 Vicryl (Ethicon, Johnson & Johnson International, New Jersey) without the use of a membrane or other bone substitute (Fig. ). All patients underwent single CBCT (Sirona scanner, Dentsply Sirona, Charlotte, North Carolina; 85 kV and 6.5 mA; voxel size of 160 μm; a scanning time of 14 s) imaging immediately after surgery. Routine postoperative care included the administration of antibiotics and anti-inflammatory analgesics for five days, with sutures being removed after one week.
Secondary surgery was performed on average after 2 months for the mandible and 3 months for the maxilla, with loading occurring within three weeks of secondary surgery. During the second surgery, if bone tissue formed above the cover screw could be collected, a biopsy was performed with the patient's consent. The implant stability was evaluated during the second operation using the Osstell Mentor device (Osstell, Gothenburg, Sweden) as an ISQ value . CBCT imaging and histological examination of the bone tissue above the implant cover screw were performed during the secondary surgery for consenting patients. Measurement of bone graft using cone-beam computed tomography Using linear measurement tools on CBCT, two examiners (J.H. Lim and J.A. Lim) independently measured vertical marginal bone height and horizontal bone width, and the results were averaged. The marginal bone height around the implant was measured along the center of the implant in a cross-sectional slice, from the crest of the buccal and lingual marginal bone to the base of the implant. In the cross-sectional view, a vertical reference line was established from the radiolucent center of the cover screw to the midpoint of the fixture. A horizontal reference line was then drawn perpendicular to this vertical line at the level of the marginal bone crest. Measurements of the marginal bone were taken using these reference lines, relative to both the marginal crest and the apex of the dental implant . At the same reference line, horizontal bone width was measured at 1 mm and 5 mm below the highest point of the fixture thread. Measurements were taken immediately after the bone graft and at the time of the second surgery (Fig. ). Histological observation and histomorphometry analysis Specimens were fixed in 10% neutral-buffered formalin and then demineralized using 10% formic acid. Longitudinal sections, 5–8 μm thick, were prepared from the central region of the specimens using a microtome. The sections were then stained with hematoxylin and eosin (H&E) and scanned using a Panoramic 250 Flash III scanner (3DHISTECH, Budapest, Hungary). Following this, the samples were fixed in 10% buffered formalin and decalcified with 10% formic acid. Histological evaluation involved H&E staining as well as immunohistochemical staining for osteocalcin and BMP-2 . The scanned slides were observed using slide-viewing software (Case Viewer ver. 2.1, 3DHISTECH, Budapest, Hungary). Statistical analysis The parametric assumptions of the data were evaluated using the Kolmogorov–Smirnov test. An independent samples t-test was conducted to compare the jaw and types of graft (ridge augmentation and sinus graft) in the maxilla. Statistical analysis was performed using SPSS 25.0 for Windows (SPSS Inc., Chicago, IL, USA). Data were presented as the mean ± standard deviation.
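For readers who want to reproduce this kind of analysis outside SPSS, the following is a minimal, hypothetical sketch of the described workflow (a normality check followed by an independent samples t-test) using SciPy; the ISQ values shown are placeholder numbers for illustration, not the study data.

```python
# Illustrative sketch only: normality screening + independent samples t-test,
# mirroring the statistical workflow described above. Values are hypothetical.
import numpy as np
from scipy import stats

mandible_isq = np.array([78, 82, 85, 79, 81, 84], dtype=float)  # placeholder values
maxilla_isq = np.array([70, 73, 68, 75, 72, 71], dtype=float)   # placeholder values

# Kolmogorov-Smirnov test against a normal distribution fitted to each group.
for label, isq in (("mandible", mandible_isq), ("maxilla", maxilla_isq)):
    ks = stats.kstest(isq, "norm", args=(isq.mean(), isq.std(ddof=1)))
    print(label, "KS p-value:", round(ks.pvalue, 3))

# Independent samples t-test comparing the two jaws.
t_stat, p_value = stats.ttest_ind(mandible_isq, maxilla_isq)
print("t =", round(t_stat, 2), "p =", round(p_value, 4))
```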
Clinical results The study included 30 participants (17 males and 13 females, with a mean age of 55.0 ± 8.8 years). A total of 96 implants (46 in the mandible and 50 in the maxilla) were placed. The implants had sandblasted, acid-etched surfaces with internal hex connections (TSIII SA, Osstem, Seoul, Republic of Korea; SuperLine, Dentium, Suwon, Republic of Korea). A two-stage protocol was followed, with an initial insertion torque of > 35 Ncm, and simultaneous grafting with AutoBT.BMP. After an average healing period of 82.4 ± 19.4 days, the implant stability quotient (ISQ) at the second surgery averaged 75.5 ± 9.5 (Table ). There were no cases of implant osseointegration failure, and all final prostheses were of the SCPR type and were secured with a torque of over 50 Ncm. According to the jaw, 46 and 50 implants were placed in the mandible and maxilla, respectively (Table ). There were no statistically significant differences in gender or age. The healing period was on average 20.9 days shorter in the mandible ( p < 0.001), and the implant stability was lower in the maxilla (71.7 ± 9.4) compared to the mandible (80.9 ± 6.5) ( p < 0.001). Regarding the maxilla (Table ), AutoBT.BMP was grafted for ridge augmentation on 36 implants and for sinus grafting on 14 implants without bone graft on the marginal bone. Age, gender, and recovery period were similar, but the implant stability was lower in the group that had only the sinus graft ( p = 0.003). Measurement of marginal bone change using cone-beam computed tomography Immediately after bone graft surgery and secondary surgery, CBCT was performed on a total of 33 maxillary implants (19 men, 14 women, average age 53.8 ± 8.4 years). Among them, 23 implants underwent ridge augmentation due to the exposure of three or more threads of the fixture, and 10 implants underwent sinus grafting without fixture exposure. There was no difference in the implant marginal bone level and ridge horizontal width between the group that underwent ridge augmentation using AutoBT.BMP and the group that did not receive marginal bone grafting (sinus graft) (Table ). Histological results Two patients provided consent for histological examination. Tissue samples were obtained from the mandible at 70 days and from the maxilla at 101 days during secondary surgery, specifically from the cover screw area. Histological analysis of the grafted AutoBT.BMP revealed osteoinductive bone healing outcomes in both the mandible and the maxilla (Fig. ). In the mandible at 70 days (Fig. A), detailed examination of DDM on the gingival side showed both osteoconductive and osteoinductive bone formation, with loose fibrous connective tissues between DDM particles and newly formed bone. The boundary between DDM and new bone appeared indistinct, resembling aponeurosis (black arrow). In the maxilla at 101 days (Fig.
B), high magnification of the lower root exhibited osteoinductive bone formation of DDM, accompanied by the development of structures similar to bone marrow. Staining with Osteocalcin (OCN) further highlighted these observations: in the maxilla (Fig. C), OCN-positive cells (red arrow) were evident around osteocytes within newly formed bone and between the new bone and DDM particles. In the mandible (Fig. D), OCN-positive cells (red arrow) were observed around osteocytes in newly formed bone. BMP-2 staining images revealed significant findings: in the maxilla (Fig. E), BMP-2 was not specifically identified in dentin or newly formed bone, but fibrous tissues between DDM particles and new bone exhibited abundant BMP-2-positive cells (yellow arrow), consistent with observations in the mandible (Fig. F).
The author hypothesized that AutoBT.BMP would effectively demonstrate the osteoinductive potential of demineralized dentin matrix (DDM) as a bone graft material by leveraging the synergistic effects of its nanoporous structure and collagen-rich matrix as a carrier for rhBMP-2. The study's findings support this hypothesis, showing that AutoBT.BMP facilitated successful osseointegration across all implants, with no failures observed, and maintained satisfactory bone volume. Histological analysis further indicated that DDM could serve as an effective alternative to autogenous bone. The osteoinductive capability, which promotes bone formation by directly inducing osteoblast differentiation through vascular formation and mesenchymal cell recruitment at the graft site, is anticipated to enhance osseointegration around the implant fixture, potentially shortening the traditional bone healing period. The patients in this study achieved an initial fixation strength of 30 Ncm and underwent secondary surgery after an average of 2 months for the mandible and 3 months for the maxilla to enable early loading. During secondary surgery, an average fixation strength of 75.5 ISQ was recorded, and all final prostheses were successfully mounted using a torque of over 50 Ncm, with no osseointegration failures. BMP is a naturally occurring multifunctional protein initially discovered for its bone-inducing capabilities as a secreted cytokine. BMP signaling is mediated through serine/threonine kinase receptors . It can activate p38 mitogen-activated protein kinase (MAPK), extracellular signal-regulated kinase (ERK), and c-Jun N-terminal kinase (JNK) signaling pathways, stimulating the expression of key osteogenic transcription factors such as runt-related transcription factor 2 (Runx2), distal-less homeobox 5 (Dlx5), and osterix (Osx). Runx2 plays a crucial role in osteogenesis . At the cellular level, BMP functions as a ligand for receptors on various cells, including osteoblasts, osteoclasts, adipose stem cells, mesenchymal stem cells, and tendon fibroblasts, promoting their differentiation and proliferation . Bone marrow stromal cells are regulated by the Runx2 and Osx genes, with Runx2 being the primary driver throughout the differentiation process. The BMP-2 signaling pathway directly influences Runx2 expression, thereby enhancing osteoblast differentiation and regulating bone formation . As a result, BMP-2 significantly promotes the formation of mineralized nodules, osteogenic differentiation, and bone healing .
In 2002, the FDA approved rhBMP-2 for clinical use in spinal fusion, treatment of open or nonunion fractures, and maxillofacial bone augmentation, delivered via a collagen sponge . Several studies have reported that the use of rhBMP-2 in the field of oral and maxillofacial surgery increases the amount of bone formation . Absorbable collagen sponges are employed as rhBMP-2 carriers for maxillary sinus grafts and tooth extractions, with FDA approval at a concentration of 1.5 mg/mL . However, common adverse events include oral pain, edema, and erythema, largely due to burst release from the carrier's mechanical instability in a physiological environment . Effective carrier scaffolds must maintain a BMP concentration at the local site high enough to support healing while avoiding adverse effects. In comparison to other carriers such as absorbable collagen sponge (ACS) and β-tricalcium phosphate (β-TCP), demineralized dentin matrix (DDM) offers distinct advantages. While ACS has a high binding capacity for rhBMP-2, its burst release under physiological pressure can lead to adverse effects such as ectopic bone formation and inflammation. DDM, with its nano-porous structure and collagen-rich matrix, provides a more stable release profile and better integration with surrounding bone tissue. In 2019, it was proposed that the release profile of DDM combined with rhBMP-2 includes both physically adsorbed and modified rhBMP-2 and physically entrapped rhBMP-2, released sequentially from the DDM surface over the first 36 days after implantation, with the endogenous BMPs within the dentin emerging last. Thus, DDM/rhBMP-2 grafts can maintain high BMP activity for at least one month post-implantation, leading to significant early bone formation and a reduced bone healing period. Although dentin is a collagen matrix, it has greater density and physical stability compared to collagen sponges. When grafted around implants, DDM/rhBMP-2 can function effectively for up to a month without burst release, potentially enabling faster osseointegration. Some animal studies have reported that rhBMP-2 reduces bone healing time and enhances implant osseointegration . However, few studies have examined the application of rhBMP-2 to the exposed implant area for early loading. When bone grafting is necessary for bone defects in humans, primary closure of deficient soft tissue can cause compression, raising concerns about the rapid release of rhBMP-2 under physiologic pressure . In previous studies using DDM as a carrier for rhBMP-2, there were no complications from rapid release, even in periodontal pocket grafting, where soft tissue coverage is challenging . Therefore, DDM/rhBMP-2 grafts around implants could stably facilitate early osseointegration and subsequent loading. In this study, early loading achieved a stability of 50 Ncm in all cases, with an average ISQ of 75.5 and at least 68.0 in sinus graft cases, indicating that loading was feasible . The ISQ value measures the lateral stiffness of the bone-implant interface and the rigidity of the surrounding bone. Implant stability tends to decrease during the initial healing phase due to bone resorption at the healing site . According to Nappo et al. in 2019, in cases with low bone density in the maxilla, the ISQ value tends to decrease during the osseointegration process, even if the healing period is extended . Interestingly, in our study, after a three-month healing period, the ISQ value did not increase in the maxilla (Table ).
Due to more extensive bone remodeling in the maxilla, maxillary implants can show greater ISQ increases with function . Since implant stability is closely related to the condition of the marginal bone around the implant fixture , the ridge augmentation group, which underwent marginal bone grafting in the maxilla, was compared with the sinus graft group, where host bone was present (Table and ). The ISQ values and CBCT findings of bone resorption in the group that received grafting at the fixture top were not inferior to the response seen in host bone. Additionally, histological analysis of bone tissue obtained from the implant cover screw (Fig. ), an area least likely to exhibit bone formation, showed new bone formation and vascularization at one month in the mandible and at one and three months in the maxilla, suggesting significant bone formation around the implant. Shortening the overall healing period to achieve efficient clinical outcomes is a goal for many clinicians. Evaluating bone healing after bone grafting typically requires histological confirmation, often necessitating studies where trephine burs are used to assess bone healing during implant placement. In a 2023 study, platelet-rich fibrin and demineralized bovine bone mineral were used, and results showed an implant stability quotient (ISQ) of over 60 after 4 months, with histomorphometric analysis demonstrating greater bone formation and faster bone healing compared to controls . However, that study involved a four-walled defect, which is conducive to bone formation in the maxillary sinus, and set a long healing period of 4 months for the experimental group and 8 months for the control group. Few studies have attempted to shorten the bone healing period to less than 4 months, especially in challenging cases involving simultaneous bone grafting and implant placement in humans. Because this study lacked a control group and could not directly assess the extent of osseointegration of the actual implants, it is difficult to conclusively determine whether DDM/rhBMP-2 significantly shortened the healing period. Nevertheless, this study involved 1–2 walled defects, which are more challenging for bone formation, and yet no implant failures were observed despite a short healing period of 2 months for the mandible and 3 months for the maxilla. Moreover, clear new bone formation was observed in histological results (Fig. ) in areas least likely for bone formation, such as above the cover screw, supporting the clinical validity of a shorter healing period. This study is the first to explore early loading in patients undergoing bone grafting with DDM/rhBMP-2 at sites with exposed implants. If further validated, these findings could revolutionize traditional implant dentistry protocols, which typically require a 5–6 month healing period after bone grafting or simultaneous implant placement before loading. While the primary focus of this study is on dental implant surgery, the findings have broader implications for other fields of bone regeneration, particularly in orthopedics. The combination of DDM and rhBMP-2 could be applied to the treatment of long bone fractures, non-union fractures, and osteonecrosis, where rapid bone regeneration and vascularization are critical. Additionally, the ability of DDM/rhBMP-2 to support early loading in dental implants suggests its potential use in orthopedic bone augmentation and reconstruction procedures. However, as a retrospective study, it has several limitations.
The lack of a control group, the inability to measure ISQ during primary surgery, and the absence of CBCT imaging during secondary surgery in all cases are notable. Moreover, since this was a study involving human subjects, it was not possible to evaluate the bone-implant contact ratio to precisely determine osseointegration at the time of loading. Future animal studies and well-designed prospective studies are necessary. This study suggests that incorporating DDM with rhBMP-2 during implant placement shows promise in reducing the healing period, potentially allowing for earlier loading. However, as a retrospective study, it has several limitations. One of the main limitations of this study is the restricted histological sample size, with only two patients consenting to provide biopsy specimens for detailed analysis. This limited number of histological samples restricts the robustness and generalizability of our findings regarding the osteoinductive effects of the DDM/rhBMP-2 graft. Notably, the absence of a control group without rhBMP-2 limits the strength of the conclusions. Further well-designed studies with appropriate control groups are necessary to confirm these findings and establish the comparative effectiveness of this approach in reducing the healing period for early implant loading. |
Mechanism-based organization of neural networks to emulate systems biology and pharmacology models

Machine learning models are a subset of artificial intelligence (AI) that utilize algorithms to imitate human-like learning and intelligence . These types of models are increasingly being used to solve complex problems across all areas of research including the development of autonomous vehicles, superhuman mastery of chess or go, or even advertising and marketing . The healthcare space is no exception, with machine learning models being employed for natural language processing (NLP) of COVID-19 research findings, in silico simulation of massive clinical trials, and even the discovery and development of new drug formulations – . In recent years, AI has become widely adopted and even commonplace within the healthcare and regulatory spaces with over 500 machine learning applications being approved as Software as a Medical Device (SaMD) by FDA to date. The 2021 FDA AIML SaMD Action Plan further cements the expansion of AI applications into modern health care and regulation , . While these AI tools are widely used and allow for rapid results and promising research breakthroughs, they are often viewed as "black boxes," wherein it is difficult to trace model outputs back to model inputs due to a lack of clarity over the internal mechanisms. This ambiguity has led to calls to find better methods to explain AI outputs or to even do away with these types of models entirely in favor of more understandable alternatives for high impact decision making , . This presents a unique and challenging dilemma with model utility being pitted against user and public confidence. One particularly interesting example highlighting this mechanism vs. black-box dilemma is the use of deep learning neural networks to emulate mechanistic model simulations – . For instance, systems biology or pharmacology models are typically mechanistic models using mathematical equations to quantitatively describe essential biological or pharmacological processes underlying the systems dynamics (time courses of physiological changes or pharmacological measurements). As numerical simulation of these equations is time consuming, recently Wang et al. proposed an artificial neural networks-based method that can learn a mapping between the parameters of mechanistic models and the final systems dynamics, bypassing the underlying mechanisms completely . While demonstrating massive acceleration in computational speed, this method "flips" a mechanistic model into a black-box one, trading the former's strength (transparency and interpretability) for that of the latter (computing efficiency). In this work, we employed the algorithms proposed by Wang et al. and endeavored to develop a mechanistically inspired deep learning model capable of leveraging the medium's strengths without sacrificing interpretability. We found that, by reorganizing the layers of artificial neural networks to mimic the biological/pharmacological processes underlying the systems of interest, it is possible to turn a black-box deep learning model into a semi-mechanistic one. The resulting model not only maintained the clarity of the mechanistic simulations, but also improved training rates and predictive capabilities relative to the previously proposed black-box AI-based emulation approach.
Mechanistic model to simulate respiratory depression under opioid agonists and antagonists Our research group recently developed a translational pharmacokinetic-pharmacodynamic (PK-PD) model for the prediction of opioid overdose and subsequent recovery of respiratory depression after administration of the opioid antagonist naloxone . As a proof of concept, we implemented a simplified version of this model (Fig. ). This model has sufficient mechanistic information for us to investigate ways to introduce system mechanisms into a deep learning framework. On its own the mechanistic model is specialized in simulating a specific clinical situation where subjects have their alveolar (end-tidal) CO2 partial pressure maintained at an elevated and constant level, a common practice in clinical studies investigating respiratory depression – . This model has different mechanistic components to describe different biological and pharmacological processes, including receptor binding, PK, and PD. These components work together to determine the dynamics of the clinical variable of interest: the fractional change of minute ventilation volume ( V_F ) under the influence of opioids and naloxone. The receptor binding component uses the following ordinary differential equation (ODE) to describe the system:

$$\frac{dR_L}{dt} = K_{on} L^{n} R - K_{off} R_L \qquad (1)$$

where L , R , and R_L are free ligands (opioids or naloxone), fraction of free (unoccupied) opioid receptors, and fraction of ligand-occupied receptors, respectively. K_on , K_off , and n are the association (binding) rate, dissociation (unbinding) rate, and the slope of the dose–effect relationship, respectively. For each ligand, these binding parameters were estimated by fitting to in vitro binding data, during which bootstrapping was used to capture the variability of in vitro data and uncertainty of model fitting, resulting in 2000 parameter sets that approximate the joint probability distribution of K_on , K_off , and n . For the PK component of naloxone, the following equations are used.

$$\frac{dT_1}{dt} = K_{tr} D F e^{-K_{tr} t} - K_{tr} T_1 \qquad (2)$$

$$\frac{dT_2}{dt} = K_{tr} T_1 - K_{in} T_2 \qquad (3)$$

$$\frac{dP}{dt} = \frac{K_{in}}{V} T_2 - \frac{C_L}{V} P \qquad (4)$$

This PK component is a transit compartment model with 2 transition ( T_1 and T_2 ) and 1 central ( P ) compartment to simulate the delayed absorption of naloxone into the plasma following intranasal (IN) administration. D is the drug dose in mg. The parameters K_tr , K_in , V , and C_L (transition rate constant, absorption rate constant, volume of distribution and total clearance, respectively) were estimated by fitting to plasma concentration data from the FDA label for NARCAN . For the PK component of opioids, we used a fentanyl PK model from literature . For the purposes of this case study, carfentanil PK was assumed to match that of fentanyl.
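As an illustration of how these components can be simulated numerically, the sketch below integrates Eqs. (1)–(4) with SciPy. It is not the authors' code: the parameter values, the dosing input, and the simplification of driving Eq. (1) directly with the plasma concentration (rather than the effect-site concentration used in the full model) are assumptions made purely for illustration.

```python
# Illustrative sketch: naloxone transit-compartment PK (Eqs. 2-4) feeding the
# receptor binding ODE (Eq. 1). All parameter values are hypothetical placeholders.
import numpy as np
from scipy.integrate import solve_ivp

K_on, K_off, n = 0.05, 0.02, 1.0                              # Eq. 1 (placeholders)
K_tr, K_in, V, CL, F, D = 0.5, 0.3, 200.0, 90.0, 0.5, 4.0     # Eqs. 2-4 (placeholders)

def naloxone_pk(t, y):
    """Transit-compartment PK, y = [T1, T2, P]."""
    T1, T2, P = y
    dT1 = K_tr * D * F * np.exp(-K_tr * t) - K_tr * T1
    dT2 = K_tr * T1 - K_in * T2
    dP = (K_in / V) * T2 - (CL / V) * P
    return [dT1, dT2, dP]

def receptor_binding(t, y, ligand):
    """Eq. 1 with R + R_L = 1; y = [R_L], ligand(t) is the free-ligand level."""
    R_L = y[0]
    return [K_on * ligand(t) ** n * (1.0 - R_L) - K_off * R_L]

t_span, t_eval = (0.0, 120.0), np.linspace(0.0, 120.0, 241)   # minutes
pk = solve_ivp(naloxone_pk, t_span, [0.0, 0.0, 0.0], t_eval=t_eval)

# Simplification for illustration: use the plasma level directly as the ligand input.
plasma = lambda t: np.interp(t, pk.t, pk.y[2])
occ = solve_ivp(receptor_binding, t_span, [0.0], t_eval=t_eval, args=(plasma,))
print(occ.y[0][-1])   # fraction of ligand-occupied receptors at t = 120 min
```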
$$\frac{dP_F}{dt} = K_{21} P_{F2} + K_{31} P_{F3} - K_{12} P_F - K_{13} P_F - K_{out} P_F \qquad (5)$$

$$\frac{dP_{F2}}{dt} = -K_{21} P_{F2} + K_{12} P_F \qquad (6)$$

$$\frac{dP_{F3}}{dt} = -K_{31} P_{F3} + K_{13} P_F \qquad (7)$$

The opioid PK component is a 3-compartment model with 1 central compartment ( P_F ) and 2 peripheral compartments ( P_F2 and P_F3 ) to simulate bolus administration of IV opioid. The parameters K_out , K_12 , K_21 , K_13 , and K_31 (elimination rate constant, forward and reverse rate constant between the central and first peripheral compartment, and the forward and reverse rate constant between the central and the second peripheral compartment) were taken from literature where the reported mean and standard deviation were used to sample 2000 parameter sets that approximate the distribution of the PK parameters in a general population with inter-subject variabilities . For the PD component, the transfer of carfentanil and naloxone from the plasma to the brain effect site was modeled as a biophase transition model with equilibration parameters taken from the literature , .

$$\frac{dL}{dt} = \frac{k_1 P_F}{V_c M_{mass}} \times 10^{9} - k_1 L \qquad (8)$$

The biophase transition model controls the rate at which the effect site concentration ( L ) equilibrates with the plasma compartment ( P_F ). The parameters k_1 and V_c (biophase equilibration term and central compartment volume) are taken from literature , and the 1e9 scaling is used to convert to the pMol concentrations used to estimate the receptor binding parameters. The effect site concentrations for opioids and naloxone were used as input to the receptor binding component to calculate the fraction of opioid mu receptor occupied by opioids ( R_L in Eq. 1), which is then translated into the fraction of minute ventilation volume relative to the baseline:

$$V_F = 1 - \alpha R_L \qquad (9)$$

where V_F is fractional minute ventilation volume, α is the opioid agonism coefficient and R_L is fraction opioid receptor occupancy. For fentanyl and its derivatives like carfentanil the α value is set to 1 . Black-box deep learning model as proposed by Wang et al. The deep learning model as proposed by Wang et al. is a Recurrent Neural Network (RNN) utilizing a long short-term memory (LSTM) framework . RNNs are a type of deep learning model that incorporate loops to allow prior states to inform future outputs in time series data. LSTM models are a subset of these RNNs which utilize memory cells to prevent state effects from vanishing over time. Wang et al. proposed to stack fully connected layers, which are widely used as hidden layers for different deep learning tasks, on top of LSTM layers as the internal network structure of their neural network model to emulate mechanistic models. Because the target system's mechanisms are ignored, the same deep learning structure can be applied to very different mechanistic models (and hence we refer to this type of model as a "black-box" model).
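To make the black-box idea concrete, the following is a minimal, hypothetical sketch in the Keras API (not the original TensorFlow 1.9 code, which is not reproduced here) of an emulator in this spirit: all mechanistic-model parameters enter a single input layer, pass through a fully connected layer, and an LSTM layer emits the receptor-occupancy time course. Layer sizes and input dimensions are placeholders.

```python
# Illustrative sketch of a black-box emulator in the spirit of Wang et al.
import tensorflow as tf

N_PARAMS = 16       # placeholder: number of PK, binding, and dosing parameters
N_TIMEPOINTS = 241  # placeholder: length of the occupancy time course

params = tf.keras.Input(shape=(N_PARAMS,), name="all_parameters")
hidden = tf.keras.layers.Dense(128, activation="relu")(params)   # fully connected hidden layer
seq = tf.keras.layers.RepeatVector(N_TIMEPOINTS)(hidden)         # unroll into a sequence
occupancy = tf.keras.layers.LSTM(1, return_sequences=True,
                                 activation="sigmoid")(seq)      # occupancy in [0, 1] per time point

black_box = tf.keras.Model(inputs=params, outputs=occupancy)
black_box.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")  # Adam + MSE, as in the training described below
black_box.summary()
```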
We developed such a black-box model similar to Wang et al., which comprises a single input layer to receive parameters of the mechanistic model, a hidden fully connected layer, and an LSTM layer for output of the opioid receptor occupancy time course, which is then translated into the dynamics of minute ventilation through the PD equation above (Fig. (A)). Semi-mechanistic deep learning model The mechanistically inspired machine learning model attempts to mirror the structure of the mechanistic model to better replicate its results. Rather than a single input layer containing all parameters of interest, there are now three distinct input layers: the first for the opioid dose and PK parameters, the second for naloxone dose and PK parameters, and the third for opioid and naloxone receptor binding parameters. The PK parameters and dosing information for opioids and naloxone both pass to their own middle LSTM layers, which generate internal recurrent data that can be thought of as corresponding to the time course of opioid and naloxone's effect-site concentration in the brain, similar to the mechanistic model. This information is then passed to the final LSTM layer along with the opioid and naloxone receptor binding parameters to produce time course data for the opioid receptor occupancy, followed by translation into minute ventilation. Unlike the black-box model, there are no hidden layers in the semi-mechanistic model. The model structure can be found in Fig. (B). Training We trained both the black-box and semi-mechanistic deep learning models based on the inputs and outputs of the mechanistic model. The output is the time course of the mu opioid receptor occupancy following a specific opioid (carfentanil) and naloxone dosing scenario. The inputs include kinetic parameters associated with the mechanistic model, as well as parameters associated with dosing scenarios. For the former, 2000 sets of kinetic parameters were randomly sampled and combined from the probability distributions of PK and receptor binding parameters as estimated through experimental data (see previous sections). For the latter, it includes the opioid dose (12 discrete levels from 0.013 to 0.157 mg), the total number of naloxone doses administered (0, 1, 2, 3, or 4), the respiratory thresholds required to administer naloxone (40%, 25% and 10% of baseline minute ventilation), and the delay between the first and subsequent doses of naloxone for scenarios where additional doses were administered (2, 3, or 5 min). In total, the 2000 kinetic parameter sets (virtual subjects) and the 540 dosing scenarios led to 1,080,000 parameter combinations as training data. We utilized the same training methodology for both machine learning models with the objective function aiming to minimize the mean square error of opioid receptor occupancy relative to simulated results. As in the publication by Wang et al., we utilized the Adam algorithm of gradient descent to optimize the results . Both models were trained for 48 h on GPUs (NVIDIA Tesla V100 GPU) linked to the FDA's high-performance computing (HPC) cluster. In each epoch, we randomly set aside 10% of the training data to calculate and report the training error. Prediction The PK and receptor binding parameter distributions were randomly sampled and combined again to generate another set of 2000 kinetic parameters (a new virtual population that is different from the one used in training).
The same 540 dosing scenarios were applied, leading to 1,080,000 new parameter combinations as testing samples for both deep learning models to predict. To evaluate the performance of the semi-mechanistic and black-box deep learning models we calculated the overall root mean squared error of the median and 95% confidence intervals of the fractional minute ventilation data against the original mechanistic simulations. As a predictive "baseline", we also implemented the Partial Least Squares Regression (PLSR) model using the Scikit-learn library in Python . During training, a 15-fold cross-validation was used to determine the optimal number of PLS components. Subsequently, the trained model was used to predict the outcome of the same 1,080,000 parameter combinations as the black-box and semi-mechanistic AI models. Computational systems The mechanistic model was numerically solved by deSolve in R, a high-level language with a performance similar to MATLAB , , which was used by Wang et al. to implement their mechanistic models for benchmarking . The deep learning models were implemented in python 3.6 with TensorFlow 1.9 . As the computational efficiency depends on the computing resources (e.g., number of CPUs or GPUs), we report the normalized time it would take for a single CPU (Intel® Xeon® Gold 6226 CPU @ 2.70 GHz) to finish the mechanistic model simulation, or a single GPU (NVIDIA Tesla V100 GPU) to finish the neural network computation. To finish one dosing scenario for 2000 virtual subjects, it would take 30 min for the mechanistic model, and 2–3 min for the neural networks. To finish all 540 dosing scenarios on the 2000 virtual subjects, it would take more than 10 days for the mechanistic model, while 19 min for the neural networks. This study used the computational resources of the High-Performance Computing clusters at the Food and Drug Administration, Center for Devices and Radiological Health.
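A minimal sketch of a PLSR baseline of the kind described above is given below; the synthetic arrays stand in for the actual parameter sets and simulated time courses, and the component grid, array sizes, and random data are assumptions for illustration only.

```python
# Illustrative sketch of a cross-validated PLSR baseline with scikit-learn.
# X holds mechanistic-model parameters, Y the simulated occupancy time courses.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))    # hypothetical parameter sets
Y = rng.normal(size=(1000, 241))   # hypothetical time courses

# 15-fold cross-validation to choose the number of PLS components.
search = GridSearchCV(PLSRegression(),
                      param_grid={"n_components": range(1, 16)},
                      cv=15, scoring="neg_mean_squared_error")
search.fit(X, Y)
print(search.best_params_)

Y_pred = search.best_estimator_.predict(X)   # would be applied to new samples in practice
```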
We trained both the black-box and semi-mechanistic deep learning models based on the inputs and outputs of the mechanistic model. The output is the time course of the mu opioid receptor occupancy following a specific opioid (carfentanil) and naloxone dosing scenario. The inputs include kinetic parameters associated with the mechanistic model, as well as parameters associated with dosing scenarios. For the former, 2000 sets of kinetic parameters were randomly sampled and combined from the probability distributions of PK and receptor binding parameters as estimated through experimental data (see previous sections). For the latter, the inputs include the opioid dose (12 discrete levels from 0.013 to 0.157 mg), the total number of naloxone doses administered (0, 1, 2, 3, or 4), the respiratory thresholds required to administer naloxone (40%, 25%, and 10% of baseline minute ventilation), and the delay between the first and subsequent doses of naloxone for scenarios where additional doses were administered (2, 3, or 5 min). In total, the 2000 kinetic parameter sets (virtual subjects) and the 540 dosing scenarios led to 1,080,000 parameter combinations as training data. We utilized the same training methodology for both machine learning models, with the objective function aiming to minimize the mean square error of opioid receptor occupancy relative to simulated results. As in the publication by Wang et al., we utilized the Adam algorithm of gradient descent to optimize the results. Both models were trained for 48 h on GPUs (NVIDIA Tesla V100 GPU) linked to the FDA's high-performance computing (HPC) cluster. In each epoch, we randomly set aside 10% of the training data to calculate and report the training error. The PK and receptor binding parameter distributions were randomly sampled and combined again to generate another set of 2000 kinetic parameters (a new virtual population that is different from the one used in training). The same 540 dosing scenarios were applied, leading to 1,080,000 new parameter combinations as testing samples for both deep learning models to predict. To evaluate the performance of the semi-mechanistic and black-box deep learning models, we calculated the overall root mean squared error of the median and 95% confidence intervals of the fractional minute ventilation data against the original mechanistic simulations. As a predictive "baseline", we also implemented a Partial Least Squares Regression (PLSR) model using the Scikit-learn library in Python. During training, a 15-fold cross-validation was used to determine the optimal number of PLS components. Subsequently, the trained model was used to predict the outcome of the same 1,080,000 parameter combinations as the black-box and semi-mechanistic AI models. The mechanistic model was numerically solved by deSolve in R, a high-level language with performance similar to MATLAB, which was used by Wang et al. to implement their mechanistic models for benchmarking. The deep learning models were implemented in Python 3.6 with TensorFlow 1.9. As the computational efficiency depends on the computing resources (e.g., number of CPUs or GPUs), we report the normalized time it would take for a single CPU (Intel® Xeon® Gold 6226 CPU @ 2.70 GHz) to finish the mechanistic model simulation, or a single GPU (NVIDIA Tesla V100 GPU) to finish the neural network computation. To finish one dosing scenario for 2000 virtual subjects, it would take 30 min for the mechanistic model, and 2–3 min for the neural networks. To finish all 540 dosing scenarios on the 2000 virtual subjects, it would take more than 10 days for the mechanistic model, compared with about 19 min for the neural networks. This study used the computational resources of the High-Performance Computing clusters at the Food and Drug Administration, Center for Devices and Radiological Health.
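A minimal sketch of the dosing-scenario grid described above follows. The even spacing of the 12 opioid dose levels is an assumption for illustration; only the end points and the factor counts are taken from the text, and they reproduce the 540 scenarios and 1,080,000 parameter combinations.

```python
# Dosing-scenario grid: 12 opioid doses x 5 naloxone dose counts x 3 thresholds x 3 delays = 540.
import itertools
import numpy as np

opioid_doses        = np.linspace(0.013, 0.157, 12)   # mg, 12 discrete levels (even spacing assumed)
naloxone_dose_count = [0, 1, 2, 3, 4]                  # total number of IN naloxone doses administered
mv_thresholds       = [0.40, 0.25, 0.10]               # fraction of baseline minute ventilation triggering a dose
redose_delays_min   = [2, 3, 5]                        # delay between first and subsequent doses

scenarios = list(itertools.product(opioid_doses, naloxone_dose_count,
                                   mv_thresholds, redose_delays_min))
assert len(scenarios) == 540

n_virtual_subjects = 2000
print(len(scenarios) * n_virtual_subjects)             # 1,080,000 parameter combinations
```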
The conceptual framework of reorganizing deep learning neural networks to mimic the mechanisms of the target systems

The structure of a mechanistic model can usually be depicted as a diagram to give a conceptual presentation of the underlying processes (mechanisms) of the target system. For example, a pharmacokinetic-pharmacodynamic (PK-PD) model about the effects of opioids and naloxone on respiratory depression, such as the recently published translational model, could have processes depicting the accumulation and clearance of opioids and naloxone in the human body, the competition between opioids and naloxone in binding to the opioid receptor, and the effects of the opioid-bound receptor on a human's ventilation volume per minute (minute ventilation or MV) as a clinical endpoint. Such a mechanistic model could be depicted as a diagram in Fig. . In contrast, although conceptually inspired by the human brain, typical artificial neural networks differ significantly from biological neural networks on the structural or mechanistic level. This is true even when the deep learning model is designed to emulate a specific biological system. For example, Wang et al. recently proposed a deep learning model based on Long Short-Term Memory (LSTM) units that can be trained by a relatively small number of simulations generated by a mechanistic model, and subsequently used in place of the mechanistic model to simulate the target system in a larger parametric space and under more scenarios. While there is a significant gain in computational speed with such an approach, the deep learning neural networks lose all mechanistic information about the target system and become a "black box", as it is hard to trace the output back to the input. A neural network similar to Wang et al. for emulating the PK-PD model above is shown in Fig. (A). A comparison between the mechanistic PK-PD model (Fig. ) and the black-box deep learning model (Fig. (A)) reveals distinct structural differences. For example, in the black-box model, the information contained in the kinetic parameters of different sub-processes (PK, receptor binding, etc.), as well as the information contained in the parameters describing the overdose scenarios (opioid dose, naloxone dose, etc.), are all propagated into the common hidden layer (Fig. (A)). In contrast, in the mechanistic model, these different types of information are segregated into different components and only merged in the final step, when the PK and receptor binding components are connected (Fig. ). We reorganized the layers of neural networks to better mimic the structure of the mechanistic model. In this new model (Fig. (B)), the information flow is divided into three parts: the PK and dose of opioids is connected to one LSTM layer to mimic the opioid PK component; the PK and dose of naloxone is connected to another LSTM layer to mimic the naloxone PK component; and the outputs from the two LSTM layers above are combined with the opioid and naloxone receptor binding parameters to mimic the connection of the PK and receptor binding components in the mechanistic model. We call such a model a "semi-mechanistic deep learning model" as it is a deep learning framework with the neural network structure reorganized to partially mimic the target system it tries to emulate.

The semi-mechanistic deep learning model outperforms the black-box model in training

Following Wang et al., we used the mechanistic model to generate simulation results to train the neural network models (see Methods). Figure demonstrates the training error comparison between the black-box (blue) and semi-mechanistic deep learning model (red). To compare the training efficiency, both models were trained for the same time period (48 h).
The error for the semi-mechanistic model is significantly lower than its black-box counterpart, reaching a training error of 0.17 compared to 2.25 at the end of the training. The training process also converges much more quickly. After approximately 8 h, the semi-mechanistic model's training error drops to 2, which is not only four-fold lower than the black-box model's error of 8.7 at the same time point, but also lower than the black-box model's minimum training error after 48 h. The difference in training error is similarly seen on a per-epoch basis. The semi-mechanistic model first outperforms the final training error of the black-box model (there are 192 epochs in the 48 h period) by epoch 15. Exploratory analysis using a longer training time indicates that the black-box model's training error plateaus above the semi-mechanistic model's minimum error by a significant margin. After 72 h, the black-box model error is above 1.0, more than 5 times the final error for its semi-mechanistic counterpart.

The semi-mechanistic deep learning model can substitute the mechanistic model for population simulation

One important application of mechanistic models is to simulate large quantities of parameter combinations to represent populations of virtual subjects. For example, the mechanistic PK-PD model in Fig. can be used to answer the question: if a specific population of subjects (defined by a specific kinetic parameter set) received a certain dose of carfentanil to suppress respiration and then a certain dose of naloxone for rescue, what is the median and 95% confidence interval (CI) of the time course of minute ventilation for this population? We generated a population of 2000 virtual subjects not seen in training and used both the semi-mechanistic and black-box deep learning models to answer this question for different opioid and naloxone dosing scenarios (see Methods). The time course comparison of each of the two deep learning models against the simulation results from the mechanistic model (as the target of emulation) for a specific dosing scenario (carfentanil 0.11 mg intravenous injection, followed by naloxone 4 mg intranasal administration after minute ventilation dropped to 25% of baseline) can be seen in Fig. . Both the semi-mechanistic and black-box deep learning models are able to capture the overall trend and the "reversal point" of the median time course of minute ventilation for the virtual population (Fig. (A,B)). However, the semi-mechanistic deep learning model is better able to capture the minute ventilation at nadir (lowest point) as well as at the end of the 1 h time course (Fig. (A,B)). The difference in performance between the two models becomes more apparent when predicting the 95% CI of the population results. The semi-mechanistic model captures both the 2.5th and 97.5th percentiles (Fig. (A) blue) of the time course of minute ventilation in the population very well. However, the black-box model misses the time to nadir of the 2.5th percentile time course by approximately 200 s, and the inaccuracy increases for both the 97.5th and 2.5th percentile minute ventilation values near the end of the time course (Fig. (B) red). To quantify the overall performance over all 540 dosing scenarios (see Methods), we calculated the root mean squared error (RMSE) between the mechanistic model simulation and either the semi-mechanistic or black-box deep learning model predictions.
The semi-mechanistic deep learning model had RMSE values of 0.2, 0.375, and 0.35 for the median, 2.5th percentile, and 97.5th percentile time course minute ventilation data, respectively. In comparison, the black-box model had RMSE values of 0.6, 1.27, and 1.37, respectively (Fig. (C)). Of note, both the black-box and semi-mechanistic models outperformed a "baseline" method of using PLSR (Partial Least Square Regression) to emulate the mechanistic model. One key advantage of using a deep learning model to emulate a mechanistic one is the massive acceleration in computational speed. When the number of parameter sets (virtual subjects) is relatively small, for example finishing one single dosing scenario for a population of 2000 virtual subjects, the time taken by the deep learning models is approximately 7 times shorter than the mechanistic model. The speed gain for the deep learning framework increases as the number of simulations increases (more virtual subjects or more dosing scenarios), because the deep learning framework has a larger start-up overhead but much faster individual runs. To finish all 540 dosing scenarios for the population, the deep learning models used less than 19 min, while using the mechanistic model to finish all these simulations would take over 10 days (see Methods).
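The population summary and error metric used above can be sketched as follows; the arrays are random placeholders standing in for the mechanistic simulations and the emulator predictions of the minute-ventilation time courses.

```python
# Median and 2.5th/97.5th percentile envelopes across virtual subjects, and the
# RMSE between emulator and mechanistic-model percentile curves.
import numpy as np

rng = np.random.default_rng(0)
mechanistic = rng.uniform(0.2, 1.0, size=(2000, 361))             # placeholder: (subjects, time points)
emulator    = mechanistic + rng.normal(0.0, 0.02, size=mechanistic.shape)

def percentile_bands(mv):
    # Summarize a population of time courses by its median and 95% CI bounds
    return {q: np.percentile(mv, q, axis=0) for q in (2.5, 50.0, 97.5)}

mech_bands = percentile_bands(mechanistic)
emu_bands  = percentile_bands(emulator)

rmse = {q: np.sqrt(np.mean((emu_bands[q] - mech_bands[q]) ** 2)) for q in mech_bands}
print(rmse)  # one RMSE per percentile curve, as reported in the text
```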
Herein, we presented a machine learning modeling framework designed to improve interpretability of results and alleviate some concerns over the "black box" nature of AI models. The key feature of this model, that improves both end user and researcher comprehension, is that it maintains the mechanistic representation of the underlying physiological processes when emulating a mechanistic model to simulate a target system. While in this work the semi-mechanistic deep learning framework has been applied to a simplified version of our previously published opioid overdose model, the strategy should be applicable to any systems where mechanistic information about internal processes underlying some system dynamics is available. In addition to being more interpretable, the semi-mechanistic model also shows improvements over its black-box counterpart in both its training and predictive capabilities.
From the outset the training error is greatly reduced, with the semi-mechanistic neural networks reaching the minimum error of the black-box neural networks 8 times faster (6 h vs. 48 h) without sacrificing any predictive accuracy. This reduction in training time would further increase the advantage of such a deep learning framework to be used in place of mechanistic models, as now the time cost of “converting” an established mechanistic model to a deep learning emulator is greatly reduced. On the other hand, the fact that the semi-mechanistic deep learning model can achieve a lower training error without overfitting (as evidenced by predicting new data in Fig. ) suggests that reorganizing the neural networks to mimic the structure of a mechanistic model allows it to learn some information or pattern contained in the target system better than stacking up layers of neural networks (the “black box”). One specific application we demonstrated using our semi-mechanistic deep learning framework is to use such models (after being adequately trained to emulate a mechanistic model) to predict outcomes from large virtual populations relatively quickly. The speed gain compared to the default method (running the mechanistic model directly) depends on the complexity of the model, the software and hardware used, and the parameter space (number of potential virtual subjects or simulation scenarios). Mechanistic PK-PD models like the one we used in this study most likely would benefit from this approach because these models are complex enough to warrant a semi-mechanistic reorganizing of the deep learning neural networks, and often require the exploration of a large parameter space (e.g., global sensitivity analysis or uncertainty quantification , ) or a large number of scenarios (e.g., the 540 different simulation scenarios used in this work only represent a tiny fraction of all possible combinations of opioids and naloxone dosing schemes). There is one limitation to the methodology employed in this study when expanding to other translational models. While, in theory, this methodology should be directly applicable to other mechanistic scenarios; it has only been tested and implemented for a simplified version of our translational model to simulate opioid receptor occupancy. Future research will expand this model first to the full translational model simulations and then to other mechanistic scenarios to confirm this assumption. Similarly, we did not perform a systematic comparison between our AI models and other data-driven models in the context of emulating mechanistic models, such as Partial Least Square Regression (PLSR). Even though one implementation of PLSR was used in Fig. (C), it is intended to serve as a “baseline prediction performance” rather than a true evaluation of such methods, given that there are many different variants and improvements of PLSR that we did not implement , , . In summary, we implemented a machine learning framework that maintains the mechanistic structure of its translational model counterpart, allowing us to peer into the “black box” of artificial intelligence modeling and produce interpretable results. This framework can be expanded to cover more complex models, for instance additional opioid scenarios and opioid antagonist formulations , , , to leverage its computational efficiency and interpretability to improve understanding of overdose patient outcomes in the community setting. 
While the concept of reorganizing neural network structures to mimic the target system only applies to those deep learning models that are designed to emulate mechanistic models, this initial effort to “break” the black box can serve as an example for increasing interpretability of other AI-based models across different areas. |
Preparation and analysis of quinoa active protein (QAP) and its mechanism of inhibiting Candida albicans
They are characterized by a broad antimicrobial spectrum, strong antimicrobial activity, and the advantage of not inducing drug resistance. Furthermore, antimicrobial proteins are predominantly found in plant seeds . It has been documented that proteins isolated from Medicago sativa seeds exert an inhibitory effects on Verticillium dahlia , while proteins extracted from Momordica charantia seeds demonstrate antifungal activity against species such as Aspergillus niger . Proteins derived from Moringa seeds have been demonstrated to significantly inhibit the growth of Bacillus pumilus , as well as other microorganisms . However, there remains a paucity of research regarding the antimicrobial properties of proteins derived from quinoa. Candida albicans ( C. albicans ) is a common symbiotic fungus and an opportunistic pathogenic fungus , recognized as the primary etiological agent of human fungal infections. Although C. albicans typically does not affect healthy individuals, it can readily cause superficial mucosal infections or fatal systemic infections in immunocompromised individuals , leading to a high mortality rate . Annually, more than 6.5 million individuals globally contract infections caused by invasive fungal infections, resulting in up to 3.75 million fatalities and substantial economic repercussions . Among them, the infections caused by C. albicans account for 40–60% of all invasive fungal infections . A further significant concern for human health is the propensity of C. albicans to adhere to diverse medical device surfaces and form biofilms , which markedly contributes to the incidence of invasive candidiasis . The process of filamentous growth is integral to biofilm formation, as it necessitates the expression of multiple filament-associated genes essential for surface adhesion . C. albicans exhibits the ability to transition from its oval yeast form to a filamentous mycelium form. This morphological plasticity is intricately linked to its adaptability and pathogenicity , thereby presenting significant challenges for the clinical management of candidiasis resulting from C. albicans infection . Despite the availability of clinical drugs for the treatment of C. albicans , which target various mechanisms—such as echinocandins (disrupting β-1,3-glucan synthesis in cell walls) , fluconazole (FLC, inhibiting ergosterol synthesis) , and other antifungal medications , the emergence of drug resistance and the prevalence of side effects have become increasingly prominent concerns . Consequently, this issue has escalated into a significant global concern necessitating immediate intervention . As a result, the pursuit of innovative anti- C. albicans compounds sourced from natural products has emerged as a central objective in contemporary drug development endeavors . Therefore, the aim of this study was to isolate the quinoa active protein (QAP) with inhibitory properties against C. albicans from quinoa seeds. To achieve this, the highly sensitive and nearly 100% reliable LC-MS/MS proteomics approach, along with the advanced AB SCIEX Triple TOF™ 5600 plus mass spectrometer, was employed for the mass spectrometric identification of QAP. This facilitated the exploration of critical information, including the amino acid sequences and molecular weights of the associated proteins. Furthermore, a series of experiments were conducted to investigate the mode of action of QAP on C. albicans , and the molecular mechanism underlying QAP’s inhibitory effect on C. 
albicans was elucidated through the integration of RNA sequencing technology. Plant materials The quinoa seeds (Cheng Li No.1) utilized in this study were supplied by the Key Laboratory of Coarse Cereal Processing, Ministry of Agriculture and Rural Affairs, Chengdu University . This quinoa cultivar, meticulously developed by Chengdu University, is characterized by a high protein content. Specifically, the protein content of this variety is approximately 147.38 g/kg, while the starch content is approximately 563.36 mg/g. Strains and culture conditions C. albicans (SIIA 2284) was provided by the Department of Urology at the Affiliated Hospital of Chengdu University. The strain was initially cultured in Sabouraud Dextrose Broth (SDB) and subsequently activated by incubation for 18 h at 180 rpm and 37 °C. Preparation of QAP Quinoa seeds were initially screened to eliminate impurities, followed by drying and grinding into a fine powder. The resultant powder was then passed through a 100-mesh sieve and subsequently immersed in petroleum ether for defatting. Post-defatting, the quinoa flour was subjected to extraction at a 1:2 ratio using a 7 mM phosphate extraction buffer (pH 7.0, containing 10 mM NaCl and 1 mM EDTA) at 4 °C for 2 h. The extraction mixture was centrifuged at 4 °C and 10,000 rpm for 20 min, and the supernatant was collected as the crude QAP extract. The protein content of the crude QAP extract was quantified using the protein quantification assay kit according to the instructions provided by the Nanjing Jiancheng Bioengineering Institute. Solid ammonium sulfate was finely ground, and the crude extract of QAP was subjected to stepwise salting out using a 10% gradient. Following incubation at 4 °C for 40 min, the precipitates obtained from each fraction were dissolved in a minimal volume of phosphate extraction buffer (pH 7.0) containing 20 mmol/L. These solutions were subsequently dialyzed against a phosphate-buffered saline (PBS) buffer solution (20 mmol/L). Finally, the QAP solution was concentrated to a uniform volume, and the protein concentration was measured and adjusted as necessary. The inhibitory effects of QAP on C. albicans were evaluated using the Oxford cup-hole method. Under aseptic conditions, Oxford cups were placed into Petri dishes. Once the Sabouraud dextrose agar medium had cooled to approximately 50 °C, a suspension of C. albicans at a concentration of 3 × 10 8 CFU/mL was added at a volume ratio of 1,000:1 and thoroughly mixed before the medium was poured into the Petri dishes. Following the solidification of the medium, the Oxford cups were meticulously removed using sterilized tweezers. Subsequently, 200 µL of QAP extracts, which were prepared under different ammonium sulfate saturation conditions, were added into the cavities left by the Oxford cups. The Petri dishes were then incubated at 37 °C for 24 h. Physiological saline served as the control group, while the clinical drug fluconazole (FLC) functioned as the positive control group. The criterion for assessing the antifungal efficacy was predicated on the external diameter of the Oxford cup. Specifically, the diameter of the inhibition zone was quantified using the cross-streaking technique, with the center of the indentation created by the Oxford cup serving as the reference point for measurement. A diameter exceeding 7.8 mm was considered indicative of a significant antifungal effect. 
Conversely, diameters below this threshold were deemed to demonstrate no antifungal activity, as the growth and reproduction of C. albicans were not effectively inhibited. In accordance with clinical pharmacological guidelines, the concentration of FLC was maintained at 160 µg/mL. Quinoa bioactive proteins are separated and purified using gel chromatography The preparation of the QAP involved utilizing Sephadex G-75 and Sephadex G-50 as packing materials within a chromatographic column with dimensions of 2.0 cm × 40 cm. An appropriate quantity of Sephadex G-75 powder was accurately weighed and transferred into a 500 mL beaker. Subsequently, deionized water or elution buffer, in a volume 7–10 times that of the Sephadex G-75, was added. The resulting (wet) gel was subjected to a digital constant temperature water bath set near boiling for 30 min. After this thermal treatment and cooling, the upper layer containing gel debris and impurities was carefully removed. Additional deionized water or elution buffer was added, and the mixture was equilibrated at ambient temperature for a duration of 2 h to ensure complete swelling. Subsequently, the supernatant was decanted from the swollen Sephadex G-75 gel, and a volume of phosphate extraction solution equivalent to 1–2 times the gel volume was added to form a suspension. This suspension was then gently agitated and transferred into the column using a glass rod to facilitate slurry packing. Initially, a small portion of the gel was permitted to settle within the column prior to opening the outlet valve. The slurry packing process continued was carried out by incrementally adding and uniformly distributing the gel until a settling height of approximately 35 cm was attained, at which point the outlet valve was closed. Following the formation of the gel column, the extraction solution was added into the elution bottle. The gel column was equilibrated by passing three column volumes of the extraction solution through the column at a flow rate of 1.3 mL/min. The procedure for handling Sephadex G-50 filler was identical to that employed for Sephadex G-75. Protein mass spectrometry The supernatants containing quinoa proteins, precipitated at varying levels of ammonium sulfate saturation, were analyzed using SDS-PAGE electrophoresis followed by silver staining. For protein identification, the QAP extracted under different ammonium sulfate saturation conditions, along with those purified via Sephadex G-75 and Sephadex G-50 gel filtration chromatography, were subjected to LC-MS/MS analysis (Triple ToF 5600+AB-SCIEX). The resulting peptide sequences were identified using ProteinPilot software. Subsequently, the proteins were classified via the InterProScan (IPR)-based domain annotation method. Quantification of QAP’s minimum inhibitory concentration (MIC) The QAP solution (with concentrations ranging from 728 µg/mL to 2.8 µg/mL) and the FLC solution (with concentrations from 256 µg/mL to 0.5 µg/mL) were prepared by serial two-fold dilution. According to the method of , the antifungal activity of the QAP solution was determined using the Oxford cup punching method, consistent with the procedure detailed in the “Preparation of QAP” section. Using a sterile pipette, 200 µL of each concentration of the QAP and FLC solutions was aspirated and then added into the holes created by the Oxford cups to establish test wells. An equivalent volume of sterile physiological saline served as a blank control. 
The culture dishes were then sealed and incubated at a constant temperature of 37 °C for 24 h. Following incubation, observations were made regarding the areas surrounding the test wells in the culture dishes. The minimum inhibitory concentration (MIC) for the QAP solution was identified as the lowest concentration that resulted in an inhibition zone exceeding 7.8 mm in diameter. Similarly, the MIC for the FLC solution was determined as the lowest concentration that produced an inhibition zone with a diameter greater than 7.8 mm. Growth kinetics of C. albicans under QAP Under sterile conditions, 10 µL of C. albicans solution (SIIA 2284, 3 × 10 8 CFU/mL) was extracted from the blank control group and transferred to a sterilized 96-well plate. Subsequently, 190 µL of RPMI-1640 complete medium was added and thoroughly mixed. For the QAP treatment group, 10 µL of C. albicans solution was obtained from a sterilized 96-well plate and combined with 100 µL of QAP solution (0.78 mg/mL) and 90 µL of RPMI-1640 complete medium. The mixture was meticulously blended before being incubated at 37 °C with agitation at 80 rpm for growth monitoring. The growth curve was constructed by measuring A600 every two hours using a Multifunctional Microplate Reader (Synergy HTX; BioTek, Winooski, VT, USA). Determination of alkaline phosphatase (AKP) activity To explore the inhibitory mechanism of QAP against C. albicans , we assessed the activity of alkaline phosphatase (AKP), a commonly utilized indicator for evaluating the cell wall damage or integrity . The activated C. albicans suspension was centrifuged at 4,000 rpm for 2 min and the supernatant was removed. The resultant pellet was subsequently re-suspended in sterile phosphate-buffered saline (PBS) to attain a concentration of 3 × 10 8 CFU/mL. Subsequently, 200 µL of the C. albicans suspension was added to a sterile conical flask containing 20 mL of SDB solution, followed by the addition of 20 mL of QAP for treatment. In the blank control group, an equal volume of SDB solution was added. The cultures were incubated at 37 °C and 180 rpm for 8 h. Post-incubation, the supernatant was collected by centrifugation at 10,000 rpm for 10 min. The extracellular AKP activity was then determined according to the instructions provided with the AKP assay kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China). Activity analysis of Succinate dehydrogenase (SDH), Ca 2+ -Mg 2+ - ATPase and Catalase (CAT) The conditions for sample processing were maintained in accordance with AKP activity determination, whereas the control group was replaced with PBS. The activity of SDH, CAT and Ca 2+ -Mg 2+ -ATPase was determined by an Enzyme-linked Immunoassay Kit (Jiangsu Meimian industrial Co., Ltd.) according to the manufacturer’s protocol. The specific techniques employed for measurement are provided in . C. albicans hyphal morphology assay The nutrient-rich Spider medium, which utilizes mannitol as its carbon source, is commonly employed to facilitate morphogenesis . For the current study, the Spider agar medium was utilized to evaluate the impact of QAP on the yeast-to-hyphal transition in C. albicans . C. albicans was cultured on the Spider agar medium with varying treatments: a negative control (0 mg/mL), a test group with 0.78 mg/mL QAP, and a positive control group with 160 µg/mL FLC. The cultures were incubated at 37 °C for 96 h. Following incubation, the morphological characteristics of the C. albicans colonies were documented. 
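As a small worked illustration of the MIC read-out used earlier in this section (the lowest concentration in the two-fold dilution series whose inhibition zone exceeds the 7.8 mm Oxford-cup outer diameter), the sketch below applies that rule to made-up zone diameters; the values are illustrative and not data from this study.

```python
# MIC from inhibition-zone diameters in a serial two-fold dilution series.
def mic_from_zones(zones_by_conc, cutoff_mm=7.8):
    """zones_by_conc: {concentration (ug/mL): mean inhibition zone diameter (mm)}."""
    inhibitory = [conc for conc, diameter in zones_by_conc.items() if diameter > cutoff_mm]
    return min(inhibitory) if inhibitory else None   # lowest concentration still inhibitory

example = {728: 15.2, 364: 12.6, 182: 10.1, 91: 8.4, 45.5: 7.8, 22.8: 7.8}  # hypothetical values
print(mic_from_zones(example))  # -> 91
```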
Observation of cell morphology by scanning electron microscope (SEM) Under aseptic conditions, activated C. albicans cells were centrifuged, and the C. albicans suspension was adjusted to a concentration of 3 × 10 8 CFU/mL. Two sterilized 150 mL Erlenmeyer flasks were utilized: Flask 1 contained 40 mL of the C. albicans suspension and 20 mL of the QAP component solution at a concentration of 0.78 mg/mL; Flask 2 contained 40 mL of the C. albicans suspension and 20 mL of phosphate extraction liquid. Following an 8-h incubation period at 37 °C with agitation at 180 rpm, the culture was aseptically transferred to sterilized centrifuge tubes and subjected to centrifugation at 4,000 rpm for 1 min. The resulting supernatant was carefully removed, and the fungal organisms were harvested. Subsequently, the fungal cells underwent three washes with pre-sterilized PBS to eliminate any residual contaminants. The washed cells were subsequently fixed in a 2.5% glutaraldehyde solution at 4 °C under light-excluded conditions for 2 h to ensure optimal preservation of cellular structures. Post-fixation, the samples were subjected to a graded ethanol dehydration series (30%, 70%, and 100%) for approximately 10 min each at room temperature. Following dehydration, the samples were air-dried overnight on sample holders secured with double-sided adhesive tape. Subsequently, gold sputter coating was performed utilizing an HVS-GB vacuum evaporator under meticulously controlled conditions. Following this, the samples were subjected to localization and imaging using a Sirion 200 scanning electron microscope. Isolation of total RNA and preparation of cDNA libraries Six 150 mL Erlenmeyer flasks with uniform specification, were prepared and labeled sequentially from 1 to 6. Each flask was filled with 50 mL of SDB and subjected to sterilization at 121 °C for 20 min, followed by cooling for subsequent experimental procedures. Under strictly aseptic conditions, a single colony of C. albicans was inoculated into each flask. The inoculated cultures were then incubated at 37 °C with agitation set at 180 rpm for 18 h. Following the initial incubation phase, Flask 1 was supplemented with 50 mL of QAP solution at a concentration of 0.45 mg/mL; Flasks 2 and 3 underwent the same treatment as Flask 1. Concurrently, Flask 4 received an addition of 50 mL of phosphate extraction liquid, with Flasks 5 and 6 being treated similarly to Flask 4. Subsequently, all six flasks were reincubated under consistent conditions (37 °C, 180 rpm) for an additional 8-h duration. Upon the conclusion of this secondary incubation phase, the contents of each flask were meticulously transferred into sterile centrifuge tubes and centrifuged at 4,000 rpm for 2 min to pellet the biomass. The resultant cell pellets were flash-frozen in liquid nitrogen and were promptly stored at −80 °C for subsequent analysis. The RNA was extracted from C. albicans cells following a standardized protocol and subjected to stringent quality control assessment of RNA integrity using the Agilent 2100 Bioanalyzer. The fragmented mRNA was used as a template for the synthesis first-strand cDNA, employing random oligonucleotides in the M-MuLV reverse transcriptase system. Subsequently, RNase H was utilized to degrade the RNA strand, facilitating the synthesis of second-strand cDNA using dNTPs in the DNA polymerase I system. 
Following terminal repair, adenylation, and the ligation of sequencing adaptors, the purified double-stranded cDNA was subjected to size selection (approximately 250–300 base pairs) using AMPure XP beads. The selected cDNA fragments were then amplified via polymerase chain reaction (PCR) to construct the library. The statistical power of this experimental design, as calculated using RNASeqPower, is 0.83.

Data analysis following Illumina sequencing

Following a comprehensive library inspection, the libraries are categorized according to the prerequisites for optimal concentration and target data volume necessary for Illumina sequencing technology. Amplification is facilitated by the introduction of four fluorescently labeled deoxynucleotide triphosphates (dNTPs), DNA polymerase, and adapter primers into the flow cell. Each incorporation of a fluorescently labeled dNTP generates a corresponding fluorescence signal. This signal is subsequently captured and processed by computer software to produce sequencing peaks, thereby yielding the sequence information of the desired fragments. The acquired sequence data are subsequently subjected to a quality assessment protocol, wherein statistical analyses are conducted to identify discrepancies between the raw data and the quality-controlled processed data. Following the completion of quality control, the high-quality sequences were aligned to the reference genome using HISAT2 software. Subsequently, the mapped sequences were assembled to the genome utilizing StringTie software. A comparative analysis with existing gene annotations was then conducted using GffCompare software to identify unannotated transcript regions and discover novel transcripts or genes within this species. To ensure accurate assessment of gene expression levels, FPKM values for each gene were calculated using StringTie. Subsequently, the Euclidean distance metric was employed to assess the correlations among samples for the purpose of hierarchical clustering. Following the clustering process, a heatmap was constructed, wherein the intensity of color denotes the variance in gene expression patterns between samples, with lighter shades indicating greater differences and darker shades representing smaller differences. The resulting dendrogram illustrates the degree of similarity among samples, with proximal branches signifying higher similarity; samples exhibiting greater similarity tend to cluster more closely together. Ultimately, the DESeq2 software was utilized to conduct a significance analysis of variations in gene expression. Typically, a gene demonstrating a fold change (FC) greater than two between two sample groups is considered to exhibit significant differential expression.
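For orientation, the sketch below shows the two quantities referenced above, FPKM normalization and the fold-change (FC > 2) screen, computed by hand on placeholder counts; in the study these steps were carried out with StringTie and DESeq2 rather than manually.

```python
# FPKM normalization and a simple fold-change screen on placeholder counts.
import numpy as np

def fpkm(counts, gene_lengths_bp, total_mapped_reads):
    # FPKM = reads mapped to gene * 1e9 / (gene length in bp * total mapped reads)
    return counts * 1e9 / (gene_lengths_bp * total_mapped_reads)

counts_ctrl = np.array([150.0, 80.0, 400.0])     # hypothetical counts, control group
counts_qap  = np.array([40.0, 260.0, 395.0])     # hypothetical counts, QAP-treated group
lengths_bp  = np.array([1200.0, 900.0, 2500.0])  # hypothetical gene lengths

fpkm_ctrl = fpkm(counts_ctrl, lengths_bp, 2.0e7)
fpkm_qap  = fpkm(counts_qap,  lengths_bp, 2.2e7)

log2fc = np.log2((fpkm_qap + 1e-6) / (fpkm_ctrl + 1e-6))
is_de  = np.abs(log2fc) > 1.0   # |log2 FC| > 1 is equivalent to FC > 2
print(log2fc.round(2), is_de)
```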
Quantitative real-time PCR

The expression levels of regulatory genes in C. albicans inhibited by QAP were quantitatively assessed using real-time fluorescence quantitative polymerase chain reaction (qRT-PCR) technology. This procedure encompassed RNA extraction and purification, followed by reverse transcription into complementary DNA (cDNA) utilizing the FastKing RT Kit (With gDNase) FastKing cDNA, in accordance with the manufacturer's protocol provided by TianGen Biotech (Beijing) Co., Ltd. The specific genes analyzed in this study, along with their corresponding primers, are enumerated in . Quantitative real-time PCR (qRT-PCR) assays were conducted in triplicate utilizing a qTOWER³ G Real-Time PCR system (Analytik Jena AG, Jena, Germany). The thermal cycling protocol comprised an initial denaturation step at 95 °C for 30 s, followed by 40 cycles of denaturation at 95 °C for 5 s and annealing/extension at 60 °C for 30 s. The actin gene (ACT1) served as the reference gene, and relative gene expression levels were quantified using the 2^(−ΔΔCT) method.

Statistical analysis

To elucidate the functionality of specific genes, an extensive search was conducted within the Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) databases. Subsequently, a comprehensive analysis of GO functional enrichment and KEGG pathway enrichment was performed on the set of DEGs using the clusterProfiler software. Following Z-score normalization of the expression levels of the identified DEGs, clustering analysis was undertaken to group genes exhibiting similar expression patterns. Ultimately, a clustering heatmap was generated using the Wekemo Bioincloud Platform ( http://www.bioincloud.tech ). The data collected in this study were systematically organized using Excel 2016 software and subsequently visualized through GraphPad Prism 9 (version 9.5.0) and Cytoscape (version 3.9.1). Statistical comparisons between two independent samples were analyzed using the t-test, while differences among three or more independent samples were assessed via one-way ANOVA, supplemented by Duncan's multiple range test. The significance threshold was established at P < 0.05. These statistical analyses were performed using the pertinent functions available in SPSS 27 software. The experiment was conducted in triplicate, and the results are presented as x ± n. Asterisks indicate statistically significant differences between control and treatment groups. The RNA-seq data have been deposited in the NCBI Sequence Read Archive (SRA) database (accession number: PRJNA1166727).
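A worked illustration of the 2^(−ΔΔCT) quantification described above, with ACT1 as the reference gene, is sketched below; the Ct values are hypothetical placeholders rather than measurements from this study.

```python
# Relative expression by the 2^(-ddCT) method with ACT1 as the reference gene.
def relative_expression(ct_target_trt, ct_ref_trt, ct_target_ctl, ct_ref_ctl):
    d_ct_trt = ct_target_trt - ct_ref_trt   # normalize target to ACT1 in the treated sample
    d_ct_ctl = ct_target_ctl - ct_ref_ctl   # normalize target to ACT1 in the control sample
    dd_ct = d_ct_trt - d_ct_ctl
    return 2.0 ** (-dd_ct)

# Hypothetical example: the target gene's Ct rises by ~2 cycles relative to ACT1
# after QAP treatment, i.e. roughly a four-fold down-regulation.
print(relative_expression(ct_target_trt=26.0, ct_ref_trt=18.0,
                          ct_target_ctl=24.0, ct_ref_ctl=18.0))  # -> 0.25
```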
Following incubation at 4 °C for 40 min, the precipitates obtained from each fraction were dissolved in a minimal volume of phosphate extraction buffer (pH 7.0) containing 20 mmol/L. These solutions were subsequently dialyzed against a phosphate-buffered saline (PBS) buffer solution (20 mmol/L). Finally, the QAP solution was concentrated to a uniform volume, and the protein concentration was measured and adjusted as necessary. The inhibitory effects of QAP on C. albicans were evaluated using the Oxford cup-hole method. Under aseptic conditions, Oxford cups were placed into Petri dishes. Once the Sabouraud dextrose agar medium had cooled to approximately 50 °C, a suspension of C. albicans at a concentration of 3 × 10 8 CFU/mL was added at a volume ratio of 1,000:1 and thoroughly mixed before the medium was poured into the Petri dishes. Following the solidification of the medium, the Oxford cups were meticulously removed using sterilized tweezers. Subsequently, 200 µL of QAP extracts, which were prepared under different ammonium sulfate saturation conditions, were added into the cavities left by the Oxford cups. The Petri dishes were then incubated at 37 °C for 24 h. Physiological saline served as the control group, while the clinical drug fluconazole (FLC) functioned as the positive control group. The criterion for assessing the antifungal efficacy was predicated on the external diameter of the Oxford cup. Specifically, the diameter of the inhibition zone was quantified using the cross-streaking technique, with the center of the indentation created by the Oxford cup serving as the reference point for measurement. A diameter exceeding 7.8 mm was considered indicative of a significant antifungal effect. Conversely, diameters below this threshold were deemed to demonstrate no antifungal activity, as the growth and reproduction of C. albicans were not effectively inhibited. In accordance with clinical pharmacological guidelines, the concentration of FLC was maintained at 160 µg/mL. The preparation of the QAP involved utilizing Sephadex G-75 and Sephadex G-50 as packing materials within a chromatographic column with dimensions of 2.0 cm × 40 cm. An appropriate quantity of Sephadex G-75 powder was accurately weighed and transferred into a 500 mL beaker. Subsequently, deionized water or elution buffer, in a volume 7–10 times that of the Sephadex G-75, was added. The resulting (wet) gel was subjected to a digital constant temperature water bath set near boiling for 30 min. After this thermal treatment and cooling, the upper layer containing gel debris and impurities was carefully removed. Additional deionized water or elution buffer was added, and the mixture was equilibrated at ambient temperature for a duration of 2 h to ensure complete swelling. Subsequently, the supernatant was decanted from the swollen Sephadex G-75 gel, and a volume of phosphate extraction solution equivalent to 1–2 times the gel volume was added to form a suspension. This suspension was then gently agitated and transferred into the column using a glass rod to facilitate slurry packing. Initially, a small portion of the gel was permitted to settle within the column prior to opening the outlet valve. The slurry packing process continued was carried out by incrementally adding and uniformly distributing the gel until a settling height of approximately 35 cm was attained, at which point the outlet valve was closed. Following the formation of the gel column, the extraction solution was added into the elution bottle. 
The gel column was equilibrated by passing three column volumes of the extraction solution through the column at a flow rate of 1.3 mL/min. The procedure for handling Sephadex G-50 filler was identical to that employed for Sephadex G-75. The supernatants containing quinoa proteins, precipitated at varying levels of ammonium sulfate saturation, were analyzed using SDS-PAGE electrophoresis followed by silver staining. For protein identification, the QAP extracted under different ammonium sulfate saturation conditions, along with those purified via Sephadex G-75 and Sephadex G-50 gel filtration chromatography, were subjected to LC-MS/MS analysis (Triple ToF 5600+AB-SCIEX). The resulting peptide sequences were identified using ProteinPilot software. Subsequently, the proteins were classified via the InterProScan (IPR)-based domain annotation method. The QAP solution (with concentrations ranging from 728 µg/mL to 2.8 µg/mL) and the FLC solution (with concentrations from 256 µg/mL to 0.5 µg/mL) were prepared by serial two-fold dilution. According to the method of , the antifungal activity of the QAP solution was determined using the Oxford cup punching method, consistent with the procedure detailed in the “Preparation of QAP” section. Using a sterile pipette, 200 µL of each concentration of the QAP and FLC solutions was aspirated and then added into the holes created by the Oxford cups to establish test wells. An equivalent volume of sterile physiological saline served as a blank control. The culture dishes were then sealed and incubated at a constant temperature of 37 °C for 24 h. Following incubation, observations were made regarding the areas surrounding the test wells in the culture dishes. The minimum inhibitory concentration (MIC) for the QAP solution was identified as the lowest concentration that resulted in an inhibition zone exceeding 7.8 mm in diameter. Similarly, the MIC for the FLC solution was determined as the lowest concentration that produced an inhibition zone with a diameter greater than 7.8 mm. C. albicans under QAP Under sterile conditions, 10 µL of C. albicans solution (SIIA 2284, 3 × 10 8 CFU/mL) was extracted from the blank control group and transferred to a sterilized 96-well plate. Subsequently, 190 µL of RPMI-1640 complete medium was added and thoroughly mixed. For the QAP treatment group, 10 µL of C. albicans solution was obtained from a sterilized 96-well plate and combined with 100 µL of QAP solution (0.78 mg/mL) and 90 µL of RPMI-1640 complete medium. The mixture was meticulously blended before being incubated at 37 °C with agitation at 80 rpm for growth monitoring. The growth curve was constructed by measuring A600 every two hours using a Multifunctional Microplate Reader (Synergy HTX; BioTek, Winooski, VT, USA). To explore the inhibitory mechanism of QAP against C. albicans , we assessed the activity of alkaline phosphatase (AKP), a commonly utilized indicator for evaluating the cell wall damage or integrity . The activated C. albicans suspension was centrifuged at 4,000 rpm for 2 min and the supernatant was removed. The resultant pellet was subsequently re-suspended in sterile phosphate-buffered saline (PBS) to attain a concentration of 3 × 10 8 CFU/mL. Subsequently, 200 µL of the C. albicans suspension was added to a sterile conical flask containing 20 mL of SDB solution, followed by the addition of 20 mL of QAP for treatment. In the blank control group, an equal volume of SDB solution was added. The cultures were incubated at 37 °C and 180 rpm for 8 h. 
Post-incubation, the supernatant was collected by centrifugation at 10,000 rpm for 10 min, and the extracellular AKP activity was determined according to the instructions provided with the AKP assay kit (Nanjing Jiancheng Bioengineering Institute, Nanjing, China). Determination of succinate dehydrogenase (SDH), Ca²⁺-Mg²⁺-ATPase and catalase (CAT) activities Sample processing followed the conditions used for the AKP activity determination, except that the control group was replaced with PBS. The activities of SDH, CAT and Ca²⁺-Mg²⁺-ATPase were determined with enzyme-linked immunoassay kits (Jiangsu Meimian Industrial Co., Ltd.) according to the manufacturer's protocol; the specific measurement procedures are provided in . Yeast-to-hyphal morphology assay The nutrient-rich Spider medium, which uses mannitol as its carbon source, is commonly employed to facilitate morphogenesis . In the current study, Spider agar medium was used to evaluate the impact of QAP on the yeast-to-hyphal transition in C. albicans . C. albicans was cultured on Spider agar medium under three treatments: a negative control (0 mg/mL), a test group with 0.78 mg/mL QAP, and a positive control group with 160 µg/mL FLC. The cultures were incubated at 37 °C for 96 h, after which the morphological characteristics of the C. albicans colonies were documented. For scanning electron microscopy, activated C. albicans cells were centrifuged under aseptic conditions, and the suspension was adjusted to a concentration of 3 × 10⁸ CFU/mL. Two sterilized 150 mL Erlenmeyer flasks were used: Flask 1 contained 40 mL of the C. albicans suspension and 20 mL of the QAP component solution at 0.78 mg/mL; Flask 2 contained 40 mL of the suspension and 20 mL of phosphate extraction liquid. After an 8-h incubation at 37 °C with agitation at 180 rpm, the cultures were aseptically transferred to sterilized centrifuge tubes and centrifuged at 4,000 rpm for 1 min. The supernatant was carefully removed and the fungal cells were harvested, then washed three times with pre-sterilized PBS to eliminate residual contaminants. The washed cells were fixed in 2.5% glutaraldehyde at 4 °C in the dark for 2 h to preserve cellular structures, dehydrated through a graded ethanol series (30%, 70%, and 100%) for approximately 10 min each at room temperature, and air-dried overnight on sample holders secured with double-sided adhesive tape. Gold sputter coating was then performed with an HVS-GB vacuum evaporator under controlled conditions, and the samples were located and imaged with a Sirion 200 scanning electron microscope. For transcriptome sampling, six 150 mL Erlenmeyer flasks of uniform specification were prepared and labeled sequentially from 1 to 6. Each flask was filled with 50 mL of SDB, sterilized at 121 °C for 20 min, and cooled for subsequent use. Under strictly aseptic conditions, a single colony of C. albicans was inoculated into each flask, and the cultures were incubated at 37 °C with agitation at 180 rpm for 18 h. Following this initial incubation, Flask 1 was supplemented with 50 mL of QAP solution at a concentration of 0.45 mg/mL; Flasks 2 and 3 underwent the same treatment as Flask 1.
Concurrently, Flask 4 received 50 mL of phosphate extraction liquid, and Flasks 5 and 6 were treated in the same way as Flask 4. All six flasks were then reincubated under the same conditions (37 °C, 180 rpm) for a further 8 h. At the end of this secondary incubation, the contents of each flask were transferred into sterile centrifuge tubes and centrifuged at 4,000 rpm for 2 min to pellet the biomass. The resulting cell pellets were flash-frozen in liquid nitrogen and promptly stored at −80 °C for subsequent analysis. RNA was extracted from the C. albicans cells following a standardized protocol, and RNA integrity was assessed with an Agilent 2100 Bioanalyzer. The fragmented mRNA was used as a template for the synthesis of first-strand cDNA, employing random oligonucleotides in the M-MuLV reverse transcriptase system. RNase H was then used to degrade the RNA strand, and second-strand cDNA was synthesized with dNTPs in the DNA polymerase I system. Following terminal repair, adenylation, and ligation of sequencing adaptors, the purified double-stranded cDNA was size-selected (approximately 250–300 base pairs) using AMPure XP beads, and the selected cDNA fragments were then amplified by polymerase chain reaction (PCR) to construct the library. The statistical power of this experimental design, calculated using RNASeqPower, was 0.83. After library quality inspection, the libraries were pooled according to the effective concentration and target data volume required for Illumina sequencing. During sequencing, four fluorescently labeled deoxynucleotide triphosphates (dNTPs), DNA polymerase, and adapter primers were introduced into the flow cell; each incorporation of a fluorescently labeled dNTP generated a corresponding fluorescence signal, which was captured and processed by the instrument software to produce sequencing peaks and thereby yield the sequence of the target fragments. The acquired sequence data were then subjected to a quality assessment protocol, in which statistical analyses were conducted to compare the raw data with the quality-controlled data. Following quality control, the high-quality reads were aligned to the reference genome using HISAT2 software , the mapped reads were assembled with StringTie software , and a comparison with existing gene annotations was performed with GffCompare software to identify unannotated transcript regions and discover novel transcripts or genes within this species. To ensure accurate assessment of gene expression levels , FPKM values for each gene were calculated using StringTie . The Euclidean distance metric was then employed to assess the correlations among samples for hierarchical clustering, and a heatmap was constructed in which the intensity of color denotes the difference in gene expression patterns between samples, with lighter shades indicating greater differences and darker shades representing smaller differences. The resulting dendrogram illustrates the degree of similarity among samples: proximal branches signify higher similarity, and more similar samples cluster more closely together.
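For readers who wish to reproduce the sample-level clustering outside the original pipeline, the sketch below illustrates the same idea in Python: Euclidean distances between samples computed on log-transformed FPKM values, followed by hierarchical clustering. The input file name, sample labels, and the use of average linkage are assumptions, since they are not stated in the text.

```python
import numpy as np
import pandas as pd
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, leaves_list

# Hypothetical input: rows = genes, columns = samples (e.g., QAP_1..3, CK_1..3), values = FPKM.
fpkm = pd.read_csv("fpkm_matrix.csv", index_col=0)

# log2(FPKM + 1) stabilises the variance before the distance calculation.
log_fpkm = np.log2(fpkm + 1)

# Pairwise Euclidean distances between samples (columns), as used for the sample heatmap.
dist = pdist(log_fpkm.T.values, metric="euclidean")
print(pd.DataFrame(squareform(dist), index=fpkm.columns, columns=fpkm.columns).round(1))

# Hierarchical clustering; average linkage is an assumption, as the linkage method is not stated.
tree = linkage(dist, method="average")
print("sample order in the dendrogram:", [fpkm.columns[i] for i in leaves_list(tree)])
```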
Ultimately, the DESeq2 software was used to test the significance of variations in gene expression; a gene showing a fold change (FC) greater than two between the two sample groups is typically considered to exhibit significant differential expression. The expression levels of regulatory genes in C. albicans inhibited by QAP were quantified by real-time fluorescence quantitative polymerase chain reaction (qRT-PCR). RNA was extracted and purified, then reverse transcribed into complementary DNA (cDNA) with the FastKing RT Kit (with gDNase), in accordance with the manufacturer's protocol provided by TianGen Biotech (Beijing) Co., Ltd. The specific genes analyzed in this study, along with their corresponding primers, are listed in . qRT-PCR assays were conducted in triplicate on a qTOWER³ G real-time PCR system (Analytik Jena AG, Jena, Germany). The thermal cycling protocol comprised an initial denaturation step at 95 °C for 30 s, followed by 40 cycles of denaturation at 95 °C for 5 s and annealing/extension at 60 °C for 30 s. The actin gene (ACT1) served as the reference gene, and relative gene expression levels were quantified using the 2^(−ΔΔCT) method. To elucidate the functions of specific genes, searches were conducted within the Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) databases , and GO functional enrichment and KEGG pathway enrichment analyses were performed on the set of DEGs using the clusterProfiler software . Following Z-score normalization of the expression levels of the identified DEGs, clustering analysis was undertaken to group genes with similar expression patterns, and a clustering heatmap was generated using the Wekemo Bioincloud platform ( http://www.bioincloud.tech ). The data collected in this study were organized with Excel 2016 and visualized with GraphPad Prism 9 (version 9.5.0) and Cytoscape (version 3.9.1). Statistical comparisons between two independent samples were analyzed using the t-test, while differences among three or more independent samples were assessed by one-way ANOVA followed by Duncan's multiple range test; the significance threshold was set at P < 0.05. These statistical analyses were performed with SPSS 27. Each experiment was conducted in triplicate, and the results are presented as mean ± standard deviation; asterisks indicate statistically significant differences between control and treatment groups. The RNA-seq data have been deposited in the NCBI Sequence Read Archive (SRA) database (accession number: PRJNA1166727 ). Isolation and purification of antifungal QAP To determine the ammonium sulfate saturation range required to precipitate QAP with inhibitory activity against C. albicans , an initial investigation was conducted within the 0–30% ammonium sulfate saturation range. Although QAP precipitated in this range demonstrated inhibitory effects against C. albicans , there were no statistically significant differences in the diameters of the inhibition zones ( p > 0.05), suggesting that the proteins precipitated under these conditions were likely contaminants .
As the ammonium sulfate saturation increased into the 30%–70% range, the QAP precipitated within this range exhibited significant variations in inhibitory activity against C. albicans . At a saturation level of 80%, however, the inhibitory effect weakened markedly, suggesting that the majority of QAP had already been fully precipitated under these conditions. Proteomic analysis of the protein precipitated at each saturation identified 644, 716, 690, 647, 643 and 622 proteins in the 20%–30%, 30%–40%, 40%–50%, 50%–60%, 60%–70% and 70%–80% ammonium sulfate saturation segments, respectively. These results indicate that most of the active proteins of quinoa could be effectively separated at different ammonium sulfate saturations, and approximately 149 proteins were common to the 30%–80% salting-out range . Analyses of QAP by SDS-PAGE at different saturation of ammonium sulfate The supernatants containing QAP precipitated at the various ammonium sulfate saturation levels were analyzed by SDS-PAGE. As illustrated in , the molecular weights of the quinoa proteins ranged from approximately 14.4 to 116.0 kDa. Notably, as the ammonium sulfate saturation increased, the proteins within the 18.4–25 kDa range in the supernatant decreased significantly, while the antifungal activity of the precipitated proteins was progressively enhanced. This suggests that the antifungal proteins lie within this molecular weight range, closely matching the molecular mass (24 kDa) of an antimicrobial protein previously isolated from the seed of Cynanchum komarovii Al Iljinski . Considered alongside the separation outcomes from Sephadex G-75 chromatography, in which the relative abundance of proteins within this range increased, this observation further supports the hypothesis that the antimicrobial proteins fall within the 18.4–25 kDa range, corroborating the results of the proteomic analyses . Furthermore, earlier research demonstrated that the subunits of soybean 11S globulin exhibit antimicrobial activity. These findings indicate that ammonium sulfate fractionation is an effective technique for isolating QAP with inhibitory activity against C. albicans while separating them from other proteins. Purification of QAP by dextran gel chromatography QAP solutions purified on Sephadex G-75 and Sephadex G-50 columns were evaluated for antifungal activity against C. albicans on Sabouraud dextrose agar medium. The QAP solution derived from Sephadex G-75 had a concentration of 0.78 mg/mL, whereas the solution from Sephadex G-50 had a concentration of 0.4 mg/mL. Compared with purification on the Sephadex G-75 column, the spectrum of active protein fractions in quinoa was further narrowed by purification on the Sephadex G-50 column . Proteomic analysis revealed that 18 proteins were shared between the fractions obtained through Sephadex G-75 and Sephadex G-50 purification . Despite the significant decrease in QAP concentration, the diameter of the inhibition zones increased between the two groups ( p < 0.05). Moreover, the inhibition zone diameter of QAP purified on Sephadex G-50 was larger than that of the clinical drug FLC ( and ). Determination of MIC of QAP The MIC of QAP against C.
albicans was determined to be 182 µg/mL , with an associated antifungal zone diameter of 1.10 ± 0.04 cm. Meanwhile, for the positive control group, the MIC was found to be 16 µg/mL, with a corresponding antifungal zone diameter of 1.84 ± 0.16 cm . The effect diagrams of antifungal activity are presented in . QAP caused cell wall damage in C. albicans As shown in , the extracellular AKP activity of C. albicans treated with 0.78 mg/mL of QAP was 0.98 ± 0.09 U/mL ( p < 0.05), which is higher than that of the control group (0.62 ± 0.09 U/mL). Therefore, we concluded that QAP has the potential to induce cell wall damage in C. albicans . QAP exhibits inhibitory effects on the growth and reproduction of C. albicans As illustrated in , the initial five-hour period was characterized by relatively sluggish growth of C. albicans , with no significant difference in optical density (OD) observed between the two groups. Between 5 and 20 h, the OD of the control group increased rapidly, whereas the growth rate of the OD in the QAP group was markedly lower compared to the control group during this interval. By 30 h, both groups reached a stable state, with the OD of the control group recorded at 1.400 ± 0.008, while the OD of the QAP group was 0.621 ± 0.010. Throughout the duration of the study, the OD of the QAP group consistently remained lower than that of the control group. Activity analysis of succinate dehydrogenase (SDH), Ca 2+ -Mg 2+ - ATPase and catalase (CAT) When C. albicans was treated with QAP at concentrations of 0.78 mg/mL, 0.52 mg/mL, and 0.26 mg/mL for 8 h, the succinate dehydrogenase (SDH) activities were recorded as 334.4 ± 5.8 U/L, 341.8 ± 8.8 U/L, and 363.0 ± 6.5 U/L, respectively, while the corresponding calcium-magnesium ATPase (Ca 2+ -Mg 2+ -ATPase) activities were 92.84 ± 0.97 IU/L, 92.85 ± 2.08 IU/L, and 87.10 ± 1.20 IU/L. Concurrently, in the positive control group treated with fluconazole (FLC) at concentrations of 320 µg/mL, 160 µg/mL, and 80 µg/mL, the SDH activities were measured at 317.5 ± 8.9 U/L, 319.3 ± 5.0 U/L, and 345.0 ± 5.9 U/L, with Ca 2+ -Mg 2+ -ATPase activities of 75.21 ± 0.30 IU/L, 79.99 ± 0.61 IU/L, and 77.69 ± 1.31 IU/L. Compared to the blank control group, which exhibited an SDH activity of 407.8 ± 8.1 U/L and a Ca 2+ -Mg 2+ -ATPase activity of 98.56 ± 1.59 IU/L, both the QAP-treated and FLC-treated groups demonstrated statistically significant reductions in SDH and Ca 2+ -Mg 2+ -ATPase activities ( P < 0.0001). Through comprehensive data analysis, it is evident that QAP affects the Ca 2+ -Mg 2+ -ATPase located on the cell membrane and the intracellular succinate dehydrogenase (SDH) of C. albicans , thereby disrupting intracellular homeostasis and energy metabolism. This disruption is hypothesized to result from modifications in enzymatic activities, which are crucial for maintaining the normal physiological state of the cell ( and ). Moreover, an increase in the CAT activity of C. albicans was observed following treatment with varying concentrations of QAP and FLC. The CAT activity of the negative control group was measured at 14.03 ± 0.36 U/mL, whereas the CAT activities of the QAP-treated group at concentrations of 0.78 mg/mL, 0.52 mg/mL, and 0.26 mg/mL were 19.92 ± 0.44 U/mL, 17.42 ± 0.37 U/mL, and 18.79 ± 0.52 U/mL, respectively. 
Concurrently, the CAT activities of the FLC-treated group at concentrations of 320 µg/mL, 160 µg/mL, and 80 µg/mL were 17.34 ± 0.18 U/mL, 16.32 ± 0.22 U/mL, and 18.25 ± 0.41 U/mL, respectively, and these differences were statistically significant. These data suggest that QAP disrupted the normal metabolic activities of C. albicans , potentially affecting energy metabolism and membrane functions, as evidenced by the alterations in SDH and Ca²⁺-Mg²⁺-ATPase activities. The resulting increase in intracellular oxidative stress leads C. albicans to enhance its CAT activity as a compensatory response to mitigate the excess reactive oxygen species generated by the disrupted metabolic activities . The effects of QAP on yeast-to-hyphal transition The ability of C. albicans to transition from the yeast form to the hyphal form is considered one of its most important virulence factors. As depicted in , the negative control group formed dense, long hyphae when cultured on Spider agar medium. In contrast, after treatment with either QAP (0.78 mg/mL) or FLC (160 µg/mL), the colonies formed by C. albicans cells displayed irregular edges and produced only a limited number of hyphae ( and ). This may reflect the diffusion of QAP or FLC through the Spider agar disrupting the local concentration gradients of nutrients or signaling molecules required for normal hyphal growth of C. albicans , leading to the observed irregular colony edges and reduced hyphal production. These findings suggest that QAP effectively inhibits the yeast-to-hyphal transition of C. albicans on Spider agar medium. QAP affects C. albicans colony morphology This study employed SEM to investigate the effects of QAP on the morphology of C. albicans . In the control group, C. albicans displayed plump, rough cell bodies undergoing division and growth, whereas in the group treated with 0.78 mg/mL QAP the cells displayed smoother surfaces characterized by distinct cracks and a marked decrease in budding sites . Results of Illumina sequencing and assembly Following construction and quality assessment of the cDNA libraries, library pools were subjected to Illumina sequencing according to the desired effective concentration and target data volume. The raw sequencing data for each sample were subjected to quality control using the fastp software, yielding 38.09 Gb of clean data with a GC content of 36.77% and a Q30 value of 93.78% ; these metrics satisfy the requirements for transcriptome analysis. In this study, the correlation among samples was evaluated using Euclidean distance, and a hierarchical clustering heatmap was generated to analyze the gene expression level correlations across samples. The three biological replicates within each experimental and control group clustered together, revealing clear differences in expression profiles between the groups and a high degree of correlation among replicates within each group . A comprehensive analysis revealed that 6,263 genes were expressed across all samples. Differential expression analysis was performed using the DESeq2 software package, employing stringent screening criteria of |log2(fold change)| > 1 and an adjusted p-value (p.adj) < 0.05.
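The same screening thresholds can be reproduced on an exported DESeq2 results table; the minimal sketch below assumes a hypothetical CSV export and uses the standard DESeq2 column names log2FoldChange and padj.

```python
import pandas as pd

# Hypothetical export of the DESeq2 results table (QAP-treated vs. control);
# "log2FoldChange" and "padj" are the standard DESeq2 result column names.
res = pd.read_csv("deseq2_results.csv", index_col=0)

# Apply the thresholds stated above: |log2(fold change)| > 1 and adjusted p-value < 0.05.
sig = res[(res["log2FoldChange"].abs() > 1) & (res["padj"] < 0.05)]
up = (sig["log2FoldChange"] > 0).sum()
down = (sig["log2FoldChange"] < 0).sum()
print(f"{len(sig)} DEGs: {up} up-regulated, {down} down-regulated")
```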
As a result, 1,413 genes exhibited significant differential expression, with 892 genes up-regulated and 521 genes down-regulated ( and ). Unigene function annotation A comprehensive annotation and classification analysis of the differentially expressed genes (DEGs) was performed using Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses to uncover the underlying biological mechanisms and pathways. GO analysis revealed that, compared with the control group, the QAP-treated group exhibited significant enrichment in GO terms associated with the cell wall, cellular respiration, reactive oxygen species metabolism, ion transport, ergosterol biosynthesis, and transmembrane transport proteins . To further elucidate the molecular mechanisms underlying the inhibitory effects of QAP on C. albicans , the DEGs were subjected to KEGG pathway annotation and enrichment analysis, which revealed significant enrichment in pathways including DNA replication proteins, oxidative phosphorylation, the tricarboxylic acid (TCA) cycle, and glycosyltransferases . A heatmap was then generated to illustrate differential gene expression for the DEGs within the selected pathways . Through transcriptome analysis, we identified DEGs within specific pathways for subsequent investigation of protein–protein interaction (PPI) networks using the STRING online platform ( https://string-db.org/ ); the selected DEGs and their corresponding analysis results are listed in . Cytoscape software was then used to perform topological analyses on the resulting PPI network, with node size, coloration, and intensity indicating the respective degree values. These analyses revealed SDH2 (succinate dehydrogenase 2) and ACO1 (aconitase 1) as pivotal proteins ; both participate in the TCA cycle and play a crucial role in the energy metabolism of C. albicans . Additionally, qRT-PCR validation of the expression levels of 13 selected differentially expressed genes revealed trends consistent with the initial transcriptome analysis, and all results showed statistically significant differences ( P < 0.05). Specifically, the expression levels of ZRT1, SIM1, ALS4, and PRA1 increased significantly ( P < 0.05) following treatment with quinoa active proteins, whereas the expression levels of the remaining genes decreased significantly ( P < 0.05). Notably, the expression levels of FBA1, CST20, ERG1, and SDH2 decreased extremely significantly after QAP treatment ( P < 0.0001), with SDH2 expression decreasing by more than threefold .
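The relative quantification underlying this validation follows the 2^(−ΔΔCT) calculation described in the methods. The sketch below illustrates that arithmetic with purely illustrative Ct values (ACT1 as the reference gene); the numbers are not measured data.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change by the 2^-ΔΔCt method:
    ΔCt = Ct(target) - Ct(ACT1); ΔΔCt = ΔCt(treated) - ΔCt(control)."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Illustrative mean Ct values for one down-regulated gene in QAP-treated vs. control cultures.
fold = relative_expression(ct_target=26.4, ct_ref=18.1,            # treated
                           ct_target_ctrl=24.3, ct_ref_ctrl=18.0)  # control
print(f"relative expression ≈ {fold:.2f}-fold of the control")     # ≈ 0.25-fold, i.e. ~4-fold lower
```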
Quinoa seeds are abundant in proteins, polyphenols, and various other bioactive compounds. In contemporary nutrition science, plant-derived proteins are increasingly recognized for their significant contributions to human health, and proteins sourced from plant seeds have attracted considerable interest for their antioxidative, antihypertensive, and lipid-lowering properties. Research has also indicated that proteins extracted from diverse plant seeds possess antimicrobial activities; for instance, antifungal proteins derived from Job's tears seeds and antimicrobial peptides obtained from soybean proteins have been shown to inhibit the growth of Aspergillus niger . However, there are no published reports indicating that quinoa protein inhibits C. albicans . Multidrug-resistant infections caused by C. albicans , along with the significant clinical side effects associated with such infections, present pervasive challenges that substantially affect global mortality and morbidity rates, while also imposing considerable economic burdens . In this study, active proteins with inhibitory effects against C. albicans were isolated and purified from quinoa using traditional ammonium sulfate precipitation. The inhibitory activity of the purified QAP was evaluated using the Oxford cup method, revealing an MIC of 182 µg/mL against C. albicans . Fractionation with ammonium sulfate gradients ranging from 30% to 70% saturation was effective in extracting QAP. Proteomic analysis identified a total of 149 quinoa proteins extracted at the different levels of ammonium sulfate saturation, out of an estimated 40,000 proteins present in quinoa seeds. This method thus provides a preliminary strategy for the isolation and purification of QAP. To further purify the active proteins derived from quinoa, gel filtration chromatography on Sephadex G-75 and Sephadex G-50 was subsequently implemented.
According to the manufacturer’s specifications, Sephadex G-75 and Sephadex G-50 are capable of separating proteins within the molecular weight ranges of 3–80 kDa and 1.5–30 kDa, respectively. Despite significant reduction in the quantity of purified QAP and a gradual decrease in protein concentration, there was an observed increase in the diameter of the inhibitory zone against C. albicans . Integrating data from proteomic analysis, it can be preliminarily inferred that among the 18 proteins common to both fractions, the QAP exhibiting inhibitory effects against C. albicans were identified as 11S globulins. By quantifying the extracellular AKP activity of C. albicans , the findings indicated a significant increase in extracellular AKP levels following treatment with QAP. AKP is localized between the cell wall and the membrane of C. albicans . An increase in cell wall permeability, or damage to the cell wall, results in the leakage of AKP into the extracellular environment. This observation suggests that QAP impacts the cell wall integrity of C. albicans . To further elucidate the mechanism by which QAP inhibits C. albicans , we investigated the effects of different concentrations of QAP and FLC on the activities of SDH, Ca 2+ -Mg 2+ -ATPase, and CAT enzymes in C. albicans . Our results indicate that QAP reduces the activity of the SDH and Ca 2+ -Mg 2+ -ATPase in C. albicans . The SDH is located on the inner mitochondrial membrane, is vital for the TCA cycle and the electron transport chain , converting succinate to fumarate in an energy-dependent manner and producing FADH2, which transfers electrons to coenzyme Q . Reduced SDH activity disrupts the normal function, affecting energy supply, gene regulation, and DNA replication, leading to inhibited cell growth or apoptosis. Ca 2+ -Mg 2+ -ATPase, found on the plasma and inner mitochondrial membrane, hydrolyzes ATP to release energy and extrudes calcium ions, maintaining intracellular calcium balance. A decrease in Ca 2+ -Mg 2+ -ATPase activity may disrupt intracellular calcium homeostasis and potentially lead to an imbalance in the electrochemical gradient across the membrane. Empirical evidence has demonstrated that elevated calcium concentrations can affect the TCA cycle, resulting in decreased SDH activity. Therefore, the reduction in SDH activity induced by QAP may be associated with alterations in calcium concentration due to decreased Ca 2+ -Mg 2+ -ATPase activity . However, further investigation is necessary to confirm whether there is an increase in cytoplasmic free calcium ions in C. albicans cells following QAP treatment. Catalase, a ubiquitous enzyme in C. albicans responsible for the elimination of reactive oxygen species (ROS) , operates by catalyzing the decomposition of hydrogen peroxide into water and oxygen, thereby protecting the cell. Under normal conditions, ROS produced by cells are immediately decomposed by catalase. After treatment with QAP and FLC, there was an observed trend of increased CAT enzyme activity in C. albicans cells compared to the negative control group. This suggests that QAP treatment may induce ROS production in C. albicans , thereby activating endogenous ROS-detoxifying enzymes. Considering the cytotoxic properties of ROS, their excessive accumulation can lead to damage of cellular macromolecules. To examine the effects of QAP on the morphology of C. albicans , this study utilized SEM to observe cellular alterations following treatment. 
The results indicated that, in comparison to the control group, C. albicans cells exposed to QAP exhibited significant cell shrinkage, with some cells displaying ruptured surfaces. Additionally, there was a marked reduction and flattening of budding sites. It is noteworthy that C. albicans is a dimorphic fungus that reproduces through budding. Notably, QAP effectively inhibited the growth of C. albicans hyphae. These observations are consistent with experimental results demonstrating the inhibitory effect of QAP on C. albicans growth, thereby confirming its efficacy in suppressing proliferation. Furthermore, these morphological alterations suggest that the inhibitory mechanism of QAP against C. albicans involves damage to cell walls and membranes, as well as inhibition of filamentous growth. GO analysis indicated an enrichment of terms associated with cell wall organization, fungal-type cell wall biogenesis, and cation transport. This suggests that QAP may induce some degree of damage to the cell wall and membrane of C. albicans . It is plausible that QAP interacts with membrane proteins or translocate across membranes, potentially affecting processes such as metal ion transport and homeostasis, thereby impacting normal cellular metabolic activities in C. albicans cells. The enrichment of the DNA replication proteins pathway in the KEGG database implies that the activity of QAP may induce cell death in a subset of C. albicans cells, thereby reducing DNA replication and resulting in the observed pathway enrichment. Furthermore, pathways related to energy metabolism, such as oxidative phosphorylation, acetate metabolism, and the TCA cycle, were significantly enriched. This observation suggests that the normal energy metabolism in C. albicans cells may be disrupted, consequently disturbing the balance between energy generation, supply, and utilization. Energy metabolism homeostasis is essential for the growth and proliferation of C. albicans , and its disruption can lead to starvation-induced cell death and substantial impairment of growth and reproductive functions. For instance, oxidative phosphorylation, which is localized in the inner mitochondrial membrane, constitutes the principal energy-generating mechanism for the cell. This process is responsible for approximately 90% of ATP production in C. albicans cells and is critical for fungal viability. Beyond providing the energy requisite for cellular survival and proliferation, cellular respiration plays a crucial role in synthetic metabolism and signal transduction pathways. Interference with this pathway can result in the accumulation of reactive oxygen species within cells, ultimately culminating in cell death. The findings from the integrated analysis of GO, KEGG, and PPI networks suggest that SDH2 and ACO1 are pivotal proteins. SDH2, a protein located in the inner mitochondrial membrane, plays a crucial role in linking oxidative phosphorylation with electron transfer in C. albicans cells and serves as a marker enzyme indicative of mitochondrial function. Previous studies have demonstrated that the generation of the SDH2Δ/Δ mutant through gene knockout, coupled with virulence assays in various infection models, underscores the significance of SDH2 in virulence and hyphal formation. For instance, in the Caenorhabditis elegans model, infection with the wild-type strain resulted in more than 85% nematode mortality, whereas no mortality was observed with the SDH2Δ/Δ mutant. 
Similarly, in the murine model, the SDH2Δ/Δ mutant exhibited significantly reduced survival in the host and a decreased fungal burden in the kidneys. Furthermore, observation of hyphal growth in various hypha-inducing media, together with additional experiments, confirmed that SDH2 modulates the electron transport chain and intracellular reactive oxygen species (ROS) levels , thereby playing a pivotal role in the virulence and hyphal formation of C. albicans . ACO1, a critical enzyme in the TCA cycle, is essential for cellular energy metabolism. Mutant strains with alterations in ACO1-related genes have been generated, and growth analysis on various carbon-source media demonstrated that ACO1 is crucial for the proliferation of C. albicans ; a deficiency in ACO1 function results in significant growth defects. Consequently, QAP disrupts the energy metabolism of C. albicans by interfering with its normal respiratory processes . The ALS4 gene predominantly encodes agglutinin-like sequence proteins that are situated on the cell wall surface of C. albicans and are crucial for adhesion. Adhesion is a defining characteristic of mycelia and a fundamental factor in the pathogenesis of C. albicans infections; according to the literature, inhibiting adhesion is essential for achieving antifungal effects . The significant down-regulation of ALS4 may therefore indicate a defect in mycelial growth in C. albicans , resulting in a substantial impairment of its adhesion capability. Because adhesion is critical in the initial stages of biofilm formation by C. albicans , this also implies that the cell membrane of C. albicans is affected. The PLB1 gene encodes phospholipase B, a virulence factor of C. albicans that not only degrades the phospholipid components of host cells but also facilitates the synthesis and repair of fungal biofilms. Treatment with QAP leads to the downregulation of PLB1 gene expression, ultimately resulting in the death of C. albicans . The SIM1 gene encodes a protein with adhesin-like properties that is primarily involved in cell wall synthesis and morphological alterations. The cell wall is essential for maintaining cellular shape and enables fungal cells to withstand external physical, chemical, and biological stressors, thus serving as a vital protective barrier. The significant variation in SIM1 expression suggests anomalies in the cell wall structure of C. albicans , indicating that upregulation of this gene is needed to ensure cell wall integrity. The ERG1 and MET6 genes are linked to the cell membrane, a critical structure for sustaining the development, metabolism and homeostasis of C. albicans , and are therefore significant targets for pharmacological interventions aimed at C. albicans cells. The fungal cell membrane of C. albicans is composed of phospholipid bilayers, sterols, and membrane proteins. Ergosterol, a vital component of fungal cell membranes, is crucial for maintaining membrane integrity , permeability, and various other functions . The ERG1 gene encodes squalene monooxygenase, a pivotal enzyme in the biosynthesis of ergosterol. Meanwhile, the enzyme encoded by the MET6 gene is essential for methionine metabolic pathways, which are intricately connected to ergosterol synthesis by supplying precursor molecules necessary for this process and regulating various physiological functions, including cell membrane formation .
Numerous studies have demonstrated that the MET6 gene, which encodes methionine synthase, is crucial for the development of C. albicans , with its absence hindering growth even in environments supplemented with exogenous methionine . Upon administration of QAP, there was a marked down-regulation in the expression levels of both ERG1 and MET6 genes, which may be attributed to cellular membrane damage or the inhibition of ergosterol biosynthesis pathways. ZRT1 and PRA1 are genes associated with the transport of metal ions. The protein encoded by ZRT1 mediates the transmembrane transport of zinc ions in C. albicans . The PRA1p protein, encoded by PRA1, particularly in mycelial cells, possesses the ability to bind free zinc ions and facilitate their transfer to the ZRT1p membrane transporter located on the cell membrane surface. Subsequently, zinc ions are internalized for cellular utilization. The significant upregulation of these two genes further substantiates that treatment with QAP compromises the integrity of the cell membrane in C. albicans , inhibits hyphal growth, disrupts intracellular ion homeostasis, and necessitates the upregulation of related genes to sustain normal metabolic functions. QAP induces damage to the cell membrane of C. albicans , thereby affecting its permeability and disrupting its osmotic regulation capabilities . The SDH2 gene encodes succinate dehydrogenase, an enzyme essential to aerobic respiration and the TCA cycle within cells, serving as a component of the mitochondrial respiratory chain. Studies have indicated that the absence of the SDH2 gene leads to impaired mycelial growth, a consequence attributed to the organism’s reliance on fermentable carbon sources, which subsequently disrupts cellular energy metabolism . The CAT1 gene encodes catalase A, an enzyme crucial for regulating oxidative stress resistance in C. albicans cells. Treatment with QAP has been shown to significantly down-regulate CAT1 expression, suggesting that QAP may influence oxidative stress adaptability by down-regulating CAT1 expression. It is hypothesized that the disruption of the C. albicans cell membrane compromises its functionality, thereby increasing its susceptibility to external environmental factors . The CST20 gene encodes a mitogen-activated protein kinase that plays a pivotal role as a positive regulator in the MAPK signaling pathway, modulating morphological changes in C. albicans and promoting filamentous growth within fungal colonies. Prior studies have shown that the deletion of CST20 inhibits mycelial formation, whereas treatment with QAP leads to the down-regulation of CST20 gene expression in C. albicans . Further validation has confirmed that QAP effectively suppresses mycelial growth and development. PCK1, a gene crucial for C. albicans cellular metabolism, encodes the enzyme phosphoenolpyruvate carboxykinase, which is pivotal in the gluconeogenic pathway. This enzyme is essential for maintaining energy and carbon homeostasis under conditions of carbon limitation. The downregulation of PCK1 can lead to disruptions in the gluconeogenic process, resulting in an energy supply deficit and impaired adaptation to carbon deficient environments, potentially compromising the viability and fitness of C. albicans cell. Furthermore, the FBA1 gene encodes fructose-bisphosphate aldolase, an enzyme that plays a critical role in glycolysis, gluconeogenesis, and the pentose phosphate pathways . It has been reported that petroselinic acid can inhibit the mycelial formation of C. 
albicans by targeting fructose-1, 6-bisphosphate aldolase, thereby exerting an antifungal effect . In C. albicans cells treated with QAP, all three previously mentioned metabolic genes were significantly downregulated, indicating a disruption of the internal metabolic equilibrium and a profound impact on metabolic activities. Based on these changes in regulatory genes expression, a mechanistic diagram illustrating the inhibitory effect of QAP on C. albicans was developed . In conclusion, QAP exhibits substantial antifungal activity against C. albicans by targeting multiple cellular sites. These mechanisms include inducing damage to the cell wall and membrane, inhibiting hyphal growth, and disrupting normal cellular respiration processes such as oxidative phosphorylation, which consequently leads to dysregulation of energy metabolism in C. albicans cells. However, a limitation of this study is the inability to conduct more in-depth research on the hyphae of C. albicans , which constitute its primary virulence factor. The QAP exhibits considerable potential as a natural therapeutic agent devoid of side effects within the framework of traditional Chinese medicine for the treatment of C. albicans infections. This discovery presents a promising opportunity to leverage quinoa resources in the pharmaceutical industry, thereby augmenting its economic value. Future research should focus on further exploring the nutritional and medicinal properties of quinoa, which may contribute to the management of various chronic diseases and the promotion of human health and development . The diversity of protein types in the crude extract of QAP is notably high , posing challenges for its separation and purification. Future efforts will involve the application of various separation and purification techniques, such as anion/cation exchange, hydrophobic chromatography, and ultrafiltration to isolate and purify the existing 18 quinoa proteins. The goal is to narrow down the range of active quinoa proteins or to isolate a single active protein component that capable of inhibiting C. albicans . The findings of this study present an innovative methodology for the isolation and purification of QAP, which exhibits inhibitory effects against C. albicans . This advancement provides new insights into the development of natural pharmaceuticals devoid of side effects. In this investigation, QAP was preliminarily identified as an 11S seed storage protein using ammonium sulfate precipitation, Sephadex gel chromatography, and proteomic data analysis. Further analyses, including the assessment of SDH, Ca 2+ -Mg 2+ -ATPase, and CAT activity, SEM results, and transcriptome sequencing data, reveal that the QAP induces cellular damage to both the cell wall and membrane of C. albicans cells, inhibits hyphal growth, modulates membrane permeability, and disrupts molecular pathways including ergosterol synthesis, cellular metabolism, metal ion transport, and homeostasis. These findings elucidate the mechanistic pathways through which the QAP exerts its inhibitory effects on C. albicans . Consequently, these research findings provide substantial theoretical support for the development of multi-target natural therapeutics that effectively inhibit C. albicans and reduce the likelihood of drug resistance. This study has successfully narrowed the range of QAP to 18 types. In future research, isoelectric focusing electrophoresis will be employed to identify QAP. 
This technique leverages the differences in the isoelectric points of proteins, allowing them to migrate to their respective isoelectric points within an electric field, thereby facilitating separation. The method’s high resolution enables precise differentiation of proteins with similar isoelectric points. Through this approach, it is anticipated that single active proteins can be isolated from the current mixed proteins, their characteristics and antifungal mechanisms elucidated, and new directions for the research into the antifungal properties of QAP explored. 10.7717/peerj.18961/supp-1 Text S1 Operating Procedures of ELISA Experiment and Standard Curve 10.7717/peerj.18961/supp-2 Data S1 Effect Diagrams of MIC of QAP and FLC 10.7717/peerj.18961/supp-3 Table S1 List of specific primer sequences 10.7717/peerj.18961/supp-4 Table S2 GO terms 10.7717/peerj.18961/supp-5 Table S3 KEGG pathway 10.7717/peerj.18961/supp-6 Table S4 PPI analysis results 10.7717/peerj.18961/supp-7 Supplemental Information 7 Raw data for Figure 2 10.7717/peerj.18961/supp-8 Supplemental Information 8 Raw data for Figure 4 10.7717/peerj.18961/supp-9 Supplemental Information 9 Raw data for Figure 6A 10.7717/peerj.18961/supp-10 Supplemental Information 10 Raw data for Figure 6B 10.7717/peerj.18961/supp-11 Supplemental Information 11 Raw data for Figure 7 10.7717/peerj.18961/supp-12 Supplemental Information 12 Raw Data for Figure 8 10.7717/peerj.18961/supp-13 Supplemental Information 13 Raw data for Figure 9 10.7717/peerj.18961/supp-14 Supplemental Information 14 Raw data for Figure 10 10.7717/peerj.18961/supp-15 Supplemental Information 15 Raw data for Figure 11 10.7717/peerj.18961/supp-16 Supplemental Information 16 Raw data for Figure 14F 10.7717/peerj.18961/supp-17 Supplemental Information 17 Raw data for Figure 16 10.7717/peerj.18961/supp-18 Supplemental Information 18 MIQE Checklist 10.7717/peerj.18961/supp-19 Supplemental Information 19 The detailed procedure for the qPCR experiment |
Inactivation Effects of Hypochlorous Acid, Chlorine Dioxide, and Ozone on Airborne SARS-CoV-2 and Influenza A Virus | 09926439-5697-468e-b95d-212de5b39ede | 11698893 | Microbiology[mh] | The World Health Organization declared the coronavirus disease 2019 (COVID-19) pandemic on March 11, 2020 (World Health Organization). Since then, the emergence of mutant strains of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been continuously reported (Chavda et al.; Hill et al.; Viana et al.). The modes of SARS-CoV-2 transmission include contact, droplet, and aerosol transmission (Short & Cowling; Zhang et al.); the corresponding virus control measures have been implemented for each transmission route. To prevent further increases in the number of COVID-19 cases, virus control technologies, which physically reduce or biochemically inactivate infectious viruses in the environment, have been extensively examined (Garg et al.; Hu et al.; Xiling et al.). One of these infection control measures is the inactivation of airborne viruses, which spread from patients with COVID-19 by coughing and sneezing, to address aerosol transmission. The stability of viruses in the air or on surfaces is influenced not only by the virus species but also by various environmental factors, such as temperature, humidity, and the suspension medium of the virus particles (van Doremalen et al.; Kwon et al.; Bushmaker et al.; Haddrell et al.). Accordingly, the inactivation effect of virus control techniques, such as chemical treatment, UV irradiation, and physical removal, may vary depending on the environmental factors in which the virus exists. Therefore, it is important to evaluate virus control techniques against airborne viruses using a suitable evaluation system to implement appropriate strategies in real-world situations. Previous studies have reported the virucidal effects of chemical substances on viral suspensions or virus-contaminated surfaces for SARS-CoV-2 and other viruses (Hakim et al.; Kubo et al.; Urushidani et al.; Yano et al.). However, the inactivating effects of these chemical substances on viruses existing in the air remain unclear. To establish effective infection control measures against aerosol-transmitted infections, it is essential to verify the effectiveness of these substances against airborne viruses. This study aimed to investigate the inactivating effect of chemicals, such as hypochlorous acid, chlorine dioxide, and ozone, on viruses in the air. Using the developed evaluation system, the inactivation effects of these chemicals were simultaneously evaluated for SARS-CoV-2 and influenza A virus in the air under completely identical environmental conditions of temperature and relative humidity (RH) (23 ± 1 °C, 50 ± 5% RH) to simulate the real-world environment. Preparation of Viral Samples The SARS-CoV-2 strain 2019-nCoV/Japan/TY/WK-521/2020 was provided by the National Institute of Infectious Diseases, Japan. Monolayer cell plates of VeroE6/TMPRSS2 cells (JCRB1819) (JCRB Cell Bank, Osaka, Japan) were incubated with approximately 10^3 plaque-forming units (PFU)/mL of SARS-CoV-2 suspension (0.1 mL/well) at 37 °C under 5% CO2 for 1.5 h for viral adsorption onto the cells. Further, 1 mL/well of Eagle's minimum essential medium (EMEM; Sigma-Aldrich, St. Louis, MO) was added to the plate and incubated at 37 °C under 5% CO2 for 40 h.
After crude purification by centrifugation at 1000× g for 15 min at 4 °C, the SARS-CoV-2 was collected by ultracentrifugation at 100,000× g for 1 h at 4 °C and resuspended in human saliva from pooled normal donors (Lee BioSolutions, Inc., Maryland Heights, MO) that had been previously diluted tenfold with sterile ultrapure water to reduce the viscosity for particle formation by spraying. The SARS-CoV-2 salivary suspension was adjusted to 1–5 × 10^8 PFU/mL. For influenza A virus proliferation, Madin–Darby canine kidney (MDCK) cells (ATCC CCL-34; American Type Culture Collection, Manassas, VA, USA) were cultured to a monolayer in a 75 mL flask at 37 °C under 5% CO2 for 3 days in EMEM containing 10% fetal bovine serum (FBS) and 0.06 mg/mL kanamycin sulfate (FUJIFILM Wako Pure Chemical Corp., Osaka, Japan). The cells were inoculated with a 1.0-mL suspension of influenza virus (ATCC VR-1678 strain; A/Hong Kong/8/68, H3N2), adjusted to approximately 10^3 PFU/mL, and incubated at 34 °C under 5% CO2 for 1 h for viral adsorption onto the cells. Subsequently, 20 mL of EMEM containing 1.5 ppm trypsin was added to the flask and incubated at 34 °C under 5% CO2 for 2 days for influenza A virus multiplication. After crude purification by centrifugation at 1000× g for 15 min at 4 °C, the influenza A virus was collected by ultracentrifugation at 100,000× g for 1 h at 4 °C and resuspended in human saliva from pooled normal donors that had been previously diluted tenfold with sterile ultrapure water to reduce the viscosity for particle formation by spraying. The salivary suspension of the influenza A virus was adjusted to 1–5 × 10^8 PFU/mL. Finally, equal amounts of the SARS-CoV-2 and influenza A virus suspensions were mixed to prepare the mixed test virus suspension. Antiviral Test for Airborne Viruses The evaluation system was a 1 m^3 chamber made of stainless steel with dimensions of 79 cm (length) × 113 cm (width) × 113 cm (height) and was equipped with a thermo-hygrometer, pass box, and grooves (Fig. ). The BioSampler (SKC, Inc., Eighty Four, PA), used as an impinger for collecting air containing the virus, was installed in the exhaust unit with a high-efficiency particulate air (HEPA) filter adjacent to the chamber. Additionally, for internal sterilization after the test, the chamber was equipped with ultraviolet lamps, an ozone generator, a chamber air recirculation unit through a HEPA filter, and a chemical removal unit with a HEPA filter and activated carbon. The evaluation system was located in a biosafety level 3 facility. The test procedure is illustrated in Fig. . First, 2 mL of the mixed test virus suspension containing SARS-CoV-2 and influenza A virus was nebulized using an NE-C28 nebulizer (OMRON Corporation, Kyoto, Japan) into the chamber with a stirring fan for 5 min at 23 ± 1 °C with 30 ± 5% RH. Next, 20 L of air inside the chamber was collected using a BioSampler into 20 mL of phosphate-buffered saline (PBS) with 20 µM sodium thiosulfate at 12.5 L/min for 96 s as the initial sample. Subsequently, aqueous solutions of the chemical substances were loaded into the ultrasonic humidifier and sprayed for 100 s at a rate of approximately 130 mL/h. The RH in the chamber increased to 50 ± 5% RH owing to the spraying of the virus suspension and chemical substance solution, and was maintained throughout the test period. Immediately after spraying the chemical substances, 20 L of air was collected using the same method as that for the initial sample.
After 5 and 10 min of stirring using a fan, 20 L of the air samples were collected. For the control test, purified water was sprayed instead of the chemical substances. The chemical substances used in the tests included hypochlorous acid, chlorine dioxide, and ozone gas. Spraying of hypochlorous acid water (Nipro Co., Ltd., Osaka Prefecture) at concentrations of 3 and 30 ppm into the chamber resulted in concentrations of 0.002 ppm and 0.02 ppm, respectively, inside the chamber. Chlorine dioxide water (Taiko Pharmaceutical Co., Ltd., Osaka, Japan) was sprayed into the chamber, and its concentration was adjusted to achieve chlorine dioxide gas concentrations of 0.02, 0.1, and 1.0 ppm, respectively. The chlorine dioxide gas concentration was determined using a GD-70D instrument (RIKEN KEIKI Co., Ltd., Tokyo, Japan). Ozone gas was produced in the chamber using an ozone gas generator (Mitsubishi Heavy Industries, Ltd., Tokyo, Japan). The operating time of the ozone gas generator was adjusted to achieve ozone gas concentrations of 0.1, 0.3, and 1.0 ppm in the chamber, as confirmed using an ozone gas monitor OZG-EM-011 K (Applics Co., Ltd., Tokyo, Japan). Purified water was sprayed for 100 s to match the humidity in the chamber with the test conditions. Each assay was performed twice independently. Measurement of Virus Infectivity Titer using a Plaque Assay To accurately determine the infectivity titers of the viral mixture sample, we confirmed that neither virus interfered with plaque formation by the other in MDCK and VeroE6/TMPRSS2 cells, respectively. In the SARS-CoV-2 infectivity titer assay, the collected 20 L air samples in PBS with 20 µM sodium thiosulfate were diluted with 2% FBS-containing Dulbecco's modified Eagle medium. The diluted samples were inoculated into 10 wells of 6-well plates (0.1 mL each), and the viral infectivity titer per 1.0 mL of the test mixture was measured. The plate was incubated at 37 °C with 5% CO2 for 1.5 h to adsorb the virus onto the cells. After washing the cells with EMEM, 3.0 mL of overlay medium containing 0.75% agar, 2% FBS, and 0.01% DEAE-dextran in EMEM was added to each well and incubated for 2 days. The cells in the wells were fixed with 1% glutaraldehyde for 1 h and then stained with 0.0375% methylene blue for plaque quantification. For the influenza A virus infectivity titer assay, MDCK cells were cultured in 6-well plates in EMEM containing 10% FBS and 0.06 mg/mL kanamycin sulfate at 37 °C with 5% CO2 for 3–5 days. After washing the cells with EMEM, 0.1 mL of the diluted sample was added. The plate was incubated at 34 °C with 5% CO2 for 1 h to adsorb the virus onto the cells. The surface of the cultured cells was washed with EMEM once, and 3.0 mL of the overlay medium containing 0.75% agar, 1.5 ppm trypsin, and 0.01% DEAE-dextran in EMEM was added and incubated at 34 °C with 5% CO2 for 2 days. The cells were fixed with 1% glutaraldehyde for 1 h and stained with 0.0375% methylene blue for plaque quantification. The virus inactivation effect was calculated using the following formula: Virus inactivation effect = (T0 − Tt) − (C0 − Ct), where T0: Average infectivity titer (log10 PFU/20 L air) at 0 min when evaluating the chemical substances. Tt: Average infectivity titer (log10 PFU/20 L air) at the sampling time when evaluating the chemical substances.
C0: Average infectivity titer (log10 PFU/20 L air) at 0 min in the control test. Ct: Average infectivity titer (log10 PFU/20 L air) at the sampling time in the control test. Measurement of SARS-CoV-2 RNA Using Quantitative Reverse Transcription Polymerase Chain Reaction Each viral suspension was directly subjected to quantitative reverse transcription polymerase chain reaction (RT-qPCR) using the SARS-CoV-2 N1 gene detection kit (TOYOBO Co., Ltd., Osaka, Japan), according to the manufacturer's protocol. For the standard curve, a tenfold dilution series from 10^4 to 10^6 copies/mL of SARS-CoV-2 positive control RNA (Nihon Gene Research Laboratories, Inc., Miyagi, Japan) was used, and the copy number of SARS-CoV-2 RNA in each sample was calculated.
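As a worked illustration of the inactivation-effect formula defined above, the following Python sketch converts raw plaque counts into log10 titers and applies the control correction. The plaque counts used here are hypothetical values chosen for demonstration only; they are not measurements from this study.

```python
import math

def inactivation_effect(t0_pfu, tt_pfu, c0_pfu, ct_pfu):
    """Virus inactivation effect = (T0 - Tt) - (C0 - Ct), with all titers first
    converted to log10(PFU per 20 L of sampled air). The control term corrects
    for losses not caused by the chemical (settling, wall adsorption, sampling)."""
    T0, Tt = math.log10(t0_pfu), math.log10(tt_pfu)
    C0, Ct = math.log10(c0_pfu), math.log10(ct_pfu)
    return (T0 - Tt) - (C0 - Ct)

# Hypothetical plaque counts (PFU/20 L air), not values from this study:
# chemical run: 1.0e5 -> 4.0e2, control run: 1.0e5 -> 6.3e4
print(round(inactivation_effect(1.0e5, 4.0e2, 1.0e5, 6.3e4), 2))  # -> about 2.2
```

In this hypothetical case, the chemical run loses about 2.4 log10 while the control loses about 0.2 log10, so roughly 2.2 log of the reduction is attributed to the chemical itself.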
A mixed sample of SARS-CoV-2 and influenza A virus in diluted saliva was nebulized in the chamber of the evaluation system, and the inactivating effects of hypochlorous acid, chlorine dioxide, and ozone on both the viruses in the air were evaluated over time using a plaque assay. In addition, the number of SARS-CoV-2 RNA copies in the collected air samples was measured using RT-qPCR. The virus inactivation effect of hypochlorous acid for SARS-CoV-2 at 0.002 ppm for 10 min was 0.72, and the effects at 0.02 ppm for 5 and 10 min were 2.22 and 2.17, respectively (Fig. a). For influenza A virus, the inactivation effect of hypochlorous acid at 0.002 ppm for 10 min was 0.80, and the effects at 0.02 ppm for 5 and 10 min were 2.56 and 2.82, respectively (Fig. b). The infectivity titer of influenza A virus at 0.02 ppm of hypochlorous acid for 10 min was lower than the detection limit of the plaque assay. Reduction in both the viral infectivity titers was observed within approximately one order of magnitude at 0.002 ppm of hypochlorous acid and two orders of magnitude at 0.02 ppm after more than 5 min of contact. In contrast, the viral RNA assay for SARS-CoV-2 using RT-qPCR showed no reduction in viral RNA in the air samples under all conditions (Fig. c). Further, the inactivating effect of chlorine dioxide at 0.02, 0.1, and 1.0 ppm on viruses in air was evaluated. For SARS-CoV-2, the virus inactivation effects at chlorine dioxide concentrations of 0.02 and 0.1 ppm for 10 min were 0.20 and 0.21, respectively.
The virus inactivation effect of 1.0 ppm chlorine dioxide on SARS-CoV-2 was 1.15 after 100 s of contact, and the infectivity titer of SARS-CoV-2 decreased over time, with the virus inactivation effects reaching 1.49 and 1.93 for 5 and 10 min, respectively (Fig. a). For influenza A virus, the virus inactivation effects at chlorine dioxide concentrations of 0.02 and 0.1 ppm were 0.77 and 1.43, respectively, at 10 min. The virus inactivation effects of 1.0 ppm chlorine dioxide for influenza A virus were 1.66, 2.11, and 2.84 for 100 s, 5 min, and 10 min, respectively (Fig. b). Chlorine dioxide at 1.0 ppm for 10 min reduced both viral infectivity titers by two orders of magnitude, and the infectivity titer of the influenza A virus was particularly low. In contrast, the assay results of SARS-CoV-2 RNA detection from air samples showed no reduction in viral RNA in any of the collected samples (Fig. c). Furthermore, the inactivating effect of ozone gas at 0.1, 0.3, and 1.0 ppm on the airborne viruses was evaluated. For SARS-CoV-2, the inactivation effects at ozone gas concentrations of 0.1 and 0.3 ppm for 10 min were 0.42 and 0.89, respectively. The virus inactivation effects of 1.0 ppm ozone gas on SARS-CoV-2 were 2.11, 2.26, and 2.14 after 100 s, 5 min, and 10 min, respectively (Fig. a). For influenza A virus, the inactivation effects at ozone gas concentrations of 0.1 and 0.3 ppm for 10 min were 0.56 and 1.76, respectively. The virus inactivation effects of 1.0 ppm ozone gas for the influenza A virus were 1.78, 3.25, and 2.76 for 100 s, 5 min, and 10 min, respectively (Fig. b). Ozone gas at 1.0 ppm for more than 5 min reduced both viral infectivity titers by two orders of magnitude, with particularly low titers for the influenza A virus. Similar to the results for the tested hypochlorous acid and chlorine dioxide, the assay results for SARS-CoV-2 RNA detection from the air samples showed no reduction in viral RNA in any of the collected samples (Fig. c). To prevent the aerosol-based transmission of respiratory viruses, it is necessary to establish an effective method to inactivate airborne viruses. Although the virucidal effects of chemical substances on viral suspensions or virus-attached surfaces have been evaluated, information on their inactivation effect for airborne viruses, particularly for SARS-CoV-2 in the air, is limited. In this study, hypochlorous acid, chlorine dioxide, and ozone showed an inactivation effect on both airborne SARS-CoV-2 and influenza A virus, depending on the concentration and exposure time, although the effective concentrations varied depending on the chemical substances. To achieve more than a 2-log reduction in viral infectivities, exposures of 0.02 ppm hypochlorous acid for more than 5 min, 1.0 ppm chlorine dioxide for 5 to over 10 min, and 1.0 ppm ozone for 100 s to 5 min were required. In a previous study on the inactivation of SARS-CoV-2 suspensions using aqueous ozone, viral titers decreased by over 1.8 log10 FFU/mL after 5 min of contact at 0.75 mg/L (Albert et al.). Additionally, the inactivation effect of chlorine dioxide at 8 ppm for 10 s to 3 min achieved a 3 to 4 log10 TCID50/mL reduction in viral suspensions (Hatanaka et al.). These results from viral suspensions were comparable to the findings in the air samples of the present study.
In contrast, hypochlorous acid water containing 0.02% FBS exhibited a virucidal effect, achieving a 5.3 log10 TCID50/mL reduction at 10 ppm within 5 min, whereas a 1 ppm solution had no virucidal effect on SARS-CoV-2 (Kubo et al.). The virucidal effect of hypochlorous acid differed significantly between suspension and air samples. The findings of this study revealed that although the stability of the tested SARS-CoV-2 and influenza A virus strains in the air under control conditions showed no significant difference, the infectivity titer of influenza A virus was reduced by the tested chemical substances to below the detection limit of 20 PFU/20 L air within 10 min at the maximum tested concentration, whereas viable SARS-CoV-2 persisted under the same conditions. A previous study reported a higher stability of SARS-CoV-2 than that of the influenza A virus on plastic surfaces when fogging hypochlorous acid or hydrogen peroxide (Urushidani et al.). In the air, SARS-CoV-2 may be more resistant to disinfectants than the influenza A virus. In this study, the virucidal effect of chemical substances on SARS-CoV-2 was evaluated using the viral infectivity titer and viral RNA copy number. A reduction in the infectivity titer was observed upon exposure to the chemical substances, whereas the RNA copy number remained constant and showed no reduction during the tested incubation time. This implies that there was no reduction due to the natural settling or physical adsorption of the nebulized virus particles on the chamber walls; rather, the tested viruses in the air were inactivated by the various chemical substances. Furthermore, it is speculated that viral inactivation within the concentration range used in this study may not be due to damage to the viral RNA but rather to denaturation of the viral envelope. Previous reports have shown that the main inactivation mechanisms of hypochlorous acid, chlorine dioxide, and ozone are based on viral lipid peroxidation and the subsequent disruption of the lipid envelope and protein shell (Ataei-Pirkooh et al.; Block & Rowan; Ge et al.). The chemical substances evaluated in this study (ozone gas, chlorine dioxide, and hypochlorous acid) are well known to have harmful effects on humans when they exceed certain levels. They also have corrosive effects on materials such as iron, natural rubber, and nylon. The permissible exposure limits for ozone gas and chlorine dioxide are defined as 0.1 ppm each for 8 h/day (40 h/week) by the Occupational Safety and Health Administration (OSHA) in the United States (The Occupational Safety & Health Administration). The data from this study indicate that achieving a 2-log reduction in virus inactivation within a short period (less than 10 min) would exceed the permissible exposure limits for ozone gas and chlorine dioxide. Therefore, the application of these chemicals may be limited in environmental spaces where humans are present. Although no permissible exposure limit has been defined for hypochlorous acid, the standard for chlorine gas was applied, as chlorine rapidly converts to hypochlorous acid on mucous membranes (Fukuzaki). The permissible exposure limit for chlorine gas is defined as 1 ppm for 8 h/day (40 h/week) by OSHA (The Occupational Safety & Health Administration). Consequently, the condition at 0.02 ppm, which showed a 2-log reduction in airborne SARS-CoV-2 and influenza A virus within 10 min, is considered an adaptable concentration for real-world settings where humans are present.
This study has some limitations. First, we evaluated the virucidal activities of hypochlorous acid, chlorine dioxide, and ozone against one strain each of SARS-CoV-2 and influenza A virus. There may be differences in susceptibility to virucidal agents among viral strains, particularly the emerging strains of SARS-CoV-2. Second, although the experiment was performed at 23 ± 1 °C and 50 ± 5% RH, which are typical ambient conditions, temperature and humidity influence the stability of the virus particles, and SARS-CoV-2 is more stable at lower temperatures and humidity levels in aerosols and on nonporous surfaces (Biryukov et al.; Haddrell et al.). Furthermore, the virucidal activity of the evaluated chemical agents is dependent on humidity, and the inactivation effect is reduced, particularly under low-humidity conditions (Murata et al.; Nishimura et al.). Therefore, to validate the airborne virus inactivation effects of these chemical agents, further experiments at low and high humidity levels, along with the moderate humidity levels set in this study, are necessary. In conclusion, fogging of the evaluated chemical agents showed concentration- and time-dependent inactivation effects on both airborne viruses under ambient temperature and moderate humidity conditions. In actual living environments, the virus inactivation effects caused by the dispersion of chemical substances on airborne viruses are likely to vary depending on environmental factors such as the presence of humans, air circulation, degree of ventilation, and presence of household items such as wallpaper and sofas. However, we believe that the basic data obtained from this study will contribute to further investigations into infection control measures using chemical fogging for airborne viruses. |
Homogeneous Biosensing Based on Magnetic Particle Labels | 0034c7ab-1daf-49b0-b3c8-dc9fc0effb24 | 4934254 | Pathology[mh] | In the recent years, the growing availability and technical maturity of high-throughput technologies for molecular sample analysis has led to an ever increasing number of biomarkers reported in the literature . Applications of these biomarkers include, for example, the diagnoses of Alzheimer’s disease , chronic kidney disease , different types of cancer , diabetes , liver diseases , tuberculosis , atherosclerotic vascular disease , cardiovascular diseases or sepsis . While the translation of the discovered biomarkers into clinical practice still lags behind , some biomarkers are already successfully applied, for example the S100B protein for improving patient stratification in cases of traumatic brain injury . With the increasing focus on clinical links and quality control of current biomarker studies , it can be expected that the number of clinically relevant biomarker panels will substantially rise in the near future, thus creating an ever stronger need for biosensing technologies that allow fast and sensitive quantification of the respective biomarkers. In order to classify the numerous biosensing technologies reported in the literature, we first discriminate between measurement principles that can be applied in vivo from those measuring in vitro . The present review focuses on methods that are employed in vitro . Another basic distinction can be drawn between biosensors that make use of heterogeneous measurement principles and those that measure signals generated within the entire homogeneous sample solution phase. Heterogeneous biosensors rely on diffusion of the analyte molecules within the sample solution volume towards the sensor surface for signal generation. A well-established state of the art example of heterogeneous immunoassays is the enzyme-linked immunosorbent assay (ELISA). While heterogeneous assay principles generally display high sensitivity and wide dynamic range, labor intensive sample preparation steps that usually comprise multiple washing and incubation steps are disadvantages that limit their applicability . This especially accounts for point-of-care (PoC) testing settings which necessitate sensor principles that can be applied outside clinical laboratories, e.g., at the patient’s home for bedside monitoring or at the doctor’s office. Therefore, PoC testing requires robust, rapid and automated sensor systems . Homogeneous immunoassay principles rely on signal generation within the whole sample volume . Usually, the signal generating probes are mixed with the sample solution, and measurements are carried out on this complex mixture. Such simple “mix and measure” techniques offer great advantages for PoC testing applications since sample preparation requirements are drastically reduced. Furthermore, the three dimensional diffusion of both, analyte molecules and capture probes, leads to reduced total assay times compared to heterogeneous assay principles, where the analyte molecules have to diffuse to a two dimensional capture surface before being detected . In this paper, we review homogeneous measurement principles only. Homogeneous measurement methods can be further subdivided into measurement methods that make use of particles and particle-free approaches. 
Current state-of-the-art examples of the latter include fluorescence polarization, fluorescence correlation spectroscopy, Förster resonance energy transfer (FRET), molecular beacon-based sensor principles, and thermophoresis. Here, the fluorescence polarization measurement technique can also be employed by making use of particles. Sensing techniques that can be conducted homogeneously and are based on particle labels include surface-enhanced Raman spectroscopy, particle agglutination-based assays, and sensor principles based on magnetic particles. In this review, we focus on homogeneous biosensors that make use of magnetic particle labels. Generally, magnetic particles are already widely employed in biology and medicine. For instance, magnetic resonance imaging (MRI) can be improved by applying magnetic nanoparticles (NPs) as contrast agents, and the NPs can further be functionalized to specifically target the tissue of interest. MRI contrast agents affect the imaging signal that is generated within the tissue surrounding the nanoparticle. An alternative imaging modality is magnetic particle imaging (MPI), where the measurement signal stems directly from the magnetization of the NP labels. MPI offers the advantage of high contrast at short measurement times. Besides medical imaging, magnetic NPs are also used for therapeutic applications, e.g., for magnetic hyperthermia treatment. Here, the NPs are continuously re-magnetized in an external alternating magnetic field, and the resulting energy dissipation leads to a local temperature rise in the tissue surrounding the NPs. Thus, by targeting the NPs to cancer tissue, it is possible to specifically cause necrosis of cancer cells. Localization of magnetic NPs to a defined tissue volume is also key to magnetic drug delivery, which makes use of external magnetic gradient fields to generate forces that concentrate drug-loaded magnetic NPs at the targeted destination for drug release. Once the magnetic particles are fixed to specific cell types, these can be tracked in vivo or separated ex vivo on a chip for further analysis. Additionally, magnetic NPs are also applied for electrochemical, optical, or piezoelectric sensor principles. The detection of biomarkers in vitro by magnetic particle labels is the central focus of the current review. A key advantage of magnetic particle labels is the possibility to manipulate and actuate the particles by applying tailored magnetic fields, which can be employed to accelerate incubation processes or enable frequency-selective analysis for improving the signal-to-noise ratio of the measurement signal. Biosensing principles which employ magnetic particles only for concentration, separation, or washing steps are excluded from this review, as are chip-based measurement approaches involving microfluidics. Here, we refer to the existing review literature. A wide range of different methods to synthesize magnetic particles is reported in the literature. The most common techniques are hydrothermal synthesis, sol-gel-based fabrication, microemulsion-based methods, high-temperature decomposition of organometallic precursors, electrochemical synthesis routes, co-precipitation, and strategies based on physical condensation.
Magnetic particles for biochemical applications require specific surface modifications to ensure applicability in solutions of physiological conditions (salt concentration and pH value) as well as to enable surface functionalization for specific recognition of target molecules . In summary, the current review focuses on in vitro homogeneous biosensing approaches that make use of magnetic particle labels and magnetic actuation. To that end, we first review methods that detect the particle labels magnetically , and later move on to optical detection methods .
In this section, we review in vitro homogeneous biosensing principles that apply magnetic particle labels and make use of magnetic detection methods. We distinguish between techniques that detect the presence of magnetic particles by permeability measurements, methods that rely on measuring changes of the hydrodynamic particle volume, and approaches that are based on sensing the environment surrounding the particle labels by T2 relaxation nuclear magnetic resonance. Measurement approaches relying on surface binding of magnetic particle labels are not taken into account here. Examples of such methods include Hall sensors, magnetoresistance-based techniques, and on-chip detection of magnetic flux density changes upon magnetic particle binding. In addition, methods relying on magnetic separation of particle labels in a microfluidic channel are also out of scope. 2.1. Magnetic Permeability Measurements Magnetic permeability sensing is based on measuring the concentration of magnetic particles in a sample. Fundamental to this approach is the substantially higher value of the relative magnetic permeability of ferromagnetic materials compared to other substances, which allows the number of magnetic particle labels within a given sample volume to be quantified. The method was initially introduced for bio-assay measurements by Kriz et al., who demonstrated an experimental setup for measuring changes of the sample's magnetic permeability in the presence of magnetic particles. The technical realization is based on inserting the sample into a coil and measuring the inductance L, which is given by: (1) L = μ_0 μ_r A N^2 / l, with the relative magnetic permeability of the material inside the coil μ_r, the vacuum permeability μ_0, the cross-section area A, the length of the applied coil l, and its number of windings N. The inductance is determined by applying the coil in a Maxwell bridge with two variable resistances and by balancing the bridge at a driving AC current. The setup can be employed for homogeneous bio-assay measurements by introducing magnetic NP labels which bind to carrier microparticles via analyte molecules (see the corresponding sketch). While free magnetic NPs remain dispersed, the microparticles sediment and, therefore, enrich the concentration of magnetic NPs at the bottom of the sample vial in the presence of analyte molecules. The method can also be carried out as a heterogeneous assay by implementing further washing steps. In their initial report, Kriz et al. demonstrated detection of glucose by magnetic permeability sensing. Later, making use of a further developed measurement method, the authors showed the detection of concanavalin A (ConA) protein by unspecific binding processes to the particle surfaces. Another demonstrated application concerns a one-step assay for measuring C-reactive protein (CRP) in both human and canine samples for point-of-care applications. Measurements of whole blood samples from 50 patients have been compared to results obtained by different reference methods (a turbidimetric immunoassay and two commercially available PoC instruments), and good correlation to the magnetic permeability measurement method has been reported. The measurement method is well suited for PoC testing due to its one-step assay procedure and fast analysis time of only about 5.5 min. Additionally, homogeneous measurements of CRP from whole-blood canine samples have been carried out in comparison to ELISA reference data.
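To make Equation (1) concrete, the following short Python sketch estimates how a small change in the relative permeability of the coil filling translates into a change in inductance. The coil geometry and the permeability values are illustrative assumptions for a long-solenoid approximation, not parameters of the cited instruments.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def coil_inductance(mu_r, area_m2, n_turns, length_m):
    """Equation (1) for a long solenoid: L = mu_0 * mu_r * A * N^2 / l."""
    return MU_0 * mu_r * area_m2 * n_turns**2 / length_m

# Hypothetical coil: 200 turns, 1 cm^2 cross section, 2 cm long
L_empty  = coil_inductance(1.0,    1e-4, 200, 0.02)   # sample without magnetic labels
L_sample = coil_inductance(1.0005, 1e-4, 200, 0.02)   # small permeability increase from labels
print(L_empty, L_sample, (L_sample - L_empty) / L_empty)
```

The relative inductance change equals the relative change in μ_r, which is what the Maxwell bridge read-out ultimately tracks.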
Heterogeneous assay magnetic permeability measurements have been performed by detecting unbound magnetic NPs after a filtration step and correlating their concentration to the concentration of present CRP . This assay has been integrated into a portable instrument that can be applied for PoC sensing . Quantitative magnetic permeability measurements have been reported with a limit of detection (LoD) of 8 mg/L CRP, while reference ELISA reached a LoD of 3 mg/L . Correlation analysis of the measured CRP biomarker concentrations in 47 canine serum samples to a commercial ELISA kit resulted in an excellent coefficient of determination of 98%, and the average values of about 6% for the intra- and inter-assay imprecision are also comparable to commercial ELISA kits . Homogeneous analysis of albumin in urine samples of 149 individuals by magnetic permeability measurements has been reported by Lu et al. . Albumin in urine serves as a protein biomarker for diabetes and hypertension patients who have a higher risk for developing a nephropathy . Comparisons with the results of a turbidimetric immunoassay that serves as the hospital’s reference method show a good correlation with the magnetic permeability measurement results . Finally, a proof of principle for the detection of DNA has been reported by Abrahamsson et al. by applying the magnetic permeability measurement method . 2.2. Detecting Variations of Hydrodynamic Properties of Magnetic Particle Labels In this section, we discuss measurement methods for which the signal originates from changes in the hydrodynamic properties of the magnetic particle labels following binding of analyte molecules. illustrates magnetic particles with surface-immobilized recognition molecules ( a) along with the two possible effects on the dispersion of the magnetic particles upon addition of analyte molecules. Binding of analyte molecules always directly alters the hydrodynamic volumes of the particles ( b), but can also lead to aggregation of magnetic particles into clusters in cases where the analyte molecule has more than one epitope available for binding to the recognition elements immobilized onto the particle surfaces ( c). Different methods and techniques are reported for measuring changes of the hydrodynamic properties of magnetic particle labels. The first approach involves aligning the magnetic particles in the direction of an external uniaxial static magnetic field and monitoring the time decay of the sample’s mean magnetization after switching off the aligning field (see ). For magnetic particles with Néel relaxation times substantially larger than Brownian relaxation times, the recorded magnetization relaxation relates to the rotational diffusion of the particles back into their randomized state. The measured decay time depends on the actual hydrodynamic magnetic particle volume, which is altered on binding of analyte molecules. The second approach is based on dynamic agitation of the magnetic particle labels by external linear AC magnetic fields (see ). Here, we further distinguish between techniques that measure the AC susceptibility of the magnetic particle label ensemble by analyzing frequency sweeps of the agitation field, methods based on a mixed-frequency detection approach, and methods that focus on studying the phase lag between the external magnetic field and the magnetization of the sample as main parameter. 
The third approach is similar to the second one, but makes use of rotating magnetic fields instead of linear AC magnetic fields to agitate the magnetic particle labels. Studies making use of this approach usually analyze the data with regard to the phase lag between the external magnetic field and the magnetization of the sample. In the following, we will discuss the measurement techniques particularly with regard to their historical development and their potential application areas by giving examples of different published bioassays. 2.2.1. Magnetorelaxation Measurements In an external magnetic field, the magnetic moments of particle labels dispersed in the sample solution experience a magnetic torque, resulting in a net sample magnetization in the field direction governed by the Langevin equation. Once the external field is switched off, the magnetic torque vanishes, and the sample's net magnetization relaxes back to zero. At the particle label scale, two different relaxation processes can be distinguished: Néel relaxation and Brownian relaxation. Néel relaxation describes an internal decay of the magnetic moment of the particle labels, while Brownian relaxation designates thermal rotational diffusion of the particle labels. Both processes can be described by characteristic relaxation times. The Brownian relaxation time τ_B is defined by: (2) τ_B = ψ / (2 k_B T), with temperature T, the Boltzmann constant k_B, and a rotational drag coefficient ψ. The latter, in the case of a spherical particle, is given by: (3) ψ = 6 η V_h, with the dynamic viscosity of the sample fluid η and the hydrodynamic NP volume V_h. Thus, for spherical particles the Brownian relaxation time τ_B can be written as: (4) τ_B = 3 η V_h / (k_B T). The dependence of the Brownian relaxation time on the hydrodynamic particle volume paves the way for homogeneous biosensing applications, as particle clustering or binding of analyte molecules induces changes in the relaxation times. This is schematically sketched in panel a, which shows the Brownian relaxation time plotted against the magnetic core diameter of a spherical NP. The NPs applied for calculating the relaxation times comprise a magnetic core with a magnetic anisotropy energy density K = 20 kJ/m^3 (corresponding to magnetite Fe3O4) and a hydrodynamic shell of thickness t around the magnetic core (indicated in panel a by the grey area). Water at room temperature is assumed as the sample medium. Obviously, large changes in particle volume resulting from particle clustering induce substantial changes in the relaxation time that can be easily detected, while smaller changes due to analyte molecule binding onto the particle surface have to be measured at a shorter time scale. Moreover, it can be seen that an increase in hydrodynamic shell thickness induces changes of the Brownian relaxation time for small particle core diameters only. Thus, for measurements of changes of the hydrodynamic shell thickness upon binding of analyte molecules, small initial nanoprobes have to be applied, while methods based on detecting particle agglomeration are less sensitive to the initial nanoprobe size. The Néel relaxation time τ_N is also called the inverse flipping frequency of the magnetization and can be written as: (5) τ_N = τ_0 exp(K V_m / (k_B T)), where τ_0 is usually in the range of 10^−9 s, K is the particle's magnetic anisotropy energy density, and V_m denotes the particle's magnetic volume.
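The following minimal Python sketch evaluates Equations (3)–(5) for water at room temperature and the anisotropy constant K = 20 kJ/m^3 quoted above. The chosen core diameters and the 10 nm shell thickness are illustrative values for a calculation of this type, not a reproduction of the figure discussed in the text.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def brownian_time(d_core_nm, shell_nm, eta=1.0e-3, T=293.0):
    """tau_B = 3 * eta * V_h / (k_B * T) for a spherical particle (Equations (3) and (4))."""
    d_h = (d_core_nm + 2 * shell_nm) * 1e-9   # hydrodynamic diameter in m
    v_h = math.pi / 6 * d_h**3                # hydrodynamic volume
    return 3 * eta * v_h / (K_B * T)

def neel_time(d_core_nm, K_aniso=20e3, tau_0=1e-9, T=293.0):
    """tau_N = tau_0 * exp(K * V_m / (k_B * T)) (Equation (5))."""
    d_m = d_core_nm * 1e-9
    v_m = math.pi / 6 * d_m**3
    return tau_0 * math.exp(K_aniso * v_m / (K_B * T))

for d in (10, 20, 30):  # magnetite-like cores with a 10 nm thick hydrodynamic shell
    print(d, brownian_time(d, 10), neel_time(d))
```

For these illustrative parameters, the Néel time grows from nanoseconds to many seconds between roughly 10 and 30 nm core diameter, while the Brownian time stays in the microsecond-to-millisecond range, reflecting the crossover behavior described in the text.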
Both relaxation times can be combined into an effective relaxation time τ_eff of the form: (6) τ_eff = τ_B τ_N / (τ_B + τ_N). Panel b shows both individual relaxation times and the effective relaxation time against the magnetic particle core diameter. All parameters employed for the calculation are the same as those applied for the calculation of the Brownian relaxation time in panel a, except that panel b assumes a hydrodynamic shell of a constant thickness of 10 nm around the magnetic core. Due to the rapid increase of the Néel relaxation time with increasing magnetic volume, the Brownian relaxation mechanism dominates for larger core diameters. Relaxation time determination was first applied for biosensing in a heterogeneous sandwich-type assay format making use of magnetic NP labels that bind specifically to a solid surface via bound analyte molecules. For the chosen NPs, the Brownian relaxation time is substantially smaller than the response time of the superconducting quantum interference device (SQUID) instrumentation applied to detect the sample magnetization. As the magnetic moment of surface-bound magnetic NP labels can only relax by the slow Néel mechanism once the external magnetizing field is switched off, only bound NP labels contribute to the signal, while the magnetization of non-bound NP labels remaining freely in the sample solution has already decayed via the fast Brownian relaxation mechanism. This measurement principle resulted in the development of the so-called magnetic relaxation immunoassay (MARIA), which is comparable to ELISA but makes use of magnetic NPs as labels instead of applying an enzymatic reaction for signal generation. Here, Lange et al. showed detection of human immunoglobulin G (IgG) protein by employing a low-Tc SQUID instrument and by measuring Néel relaxation. Furthermore, the remanence of the bound magnetic NPs has been measured to deduce the analyte concentration, as shown by Kötitz et al. Here, they applied functionalized magnetic NPs, which are bound to a flat surface via the analyte molecule, and measured the magnetization of the immobilized magnetic NPs to evaluate the analyte concentration. For the measurement of the magnetic remanence, again a static magnetic field is applied and the magnetization is observed over time, with the difference that the sample is removed during the measurement period so that the resulting change in the measured magnetization signal can be related to the remanence of the fixed magnetic NPs. Unbound magnetic NPs are already relaxed in their orientation by Brownian relaxation. Magnetorelaxation measurements (MRX, magnetorelaxometry) have also been applied to homogeneous biosensing. For example, Kötitz et al. performed magnetic induction measurements of the entire sample volume to record the relaxation signal of biotinylated iron oxide NP labels. In this case, the signal is a mixture of both Néel and Brownian relaxation, but as the Néel relaxation time of the applied particle labels is substantially larger, the Brownian mechanism dominates. Thus, measured changes in the relaxation time can be associated with changes in the hydrodynamic particle volumes (see Equation (4)), which are induced by addition of the avidin model analyte, thus inducing particle label clustering. Eberbeck et al. applied homogeneous magnetorelaxometry to study unspecific binding reaction kinetics of magnetic NPs onto latex microbeads and onto yeast cells.
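In homogeneous MRX, the Brownian relaxation time is typically obtained by fitting the recorded magnetization decay. The sketch below fits a single-exponential model to synthetic data as a minimal illustration; real samples are polydisperse, may contain a Néel-relaxing fraction, and are usually analyzed with more elaborate models, so this is not the fitting procedure of any specific cited study.

```python
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, m0, tau, offset):
    """Single-exponential Brownian relaxation model for the net sample
    magnetization after the aligning field is switched off."""
    return m0 * np.exp(-t / tau) + offset

# Synthetic decay curve standing in for fluxgate/SQUID data (true tau = 25 ms)
t = np.linspace(0.0, 0.2, 400)
signal = relaxation(t, 1.0, 0.025, 0.02) + np.random.normal(0, 0.01, t.size)

popt, _ = curve_fit(relaxation, t, signal, p0=(1.0, 0.01, 0.0))
m0_fit, tau_fit, offset_fit = popt
print(f"fitted Brownian relaxation time: {tau_fit * 1e3:.1f} ms")
```

A shift of the fitted relaxation time toward larger values between a reference and an incubated sample then indicates analyte-induced growth or clustering of the labels.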
These homogeneous MRX studies were later expanded to specific binding reactions of the biotin–streptavidin model system by coupling streptavidin- and anti-biotin-antibody-functionalized magnetic NPs to biotinylated agarose microbeads. Yang et al. also employed the biotin–avidin model system to evaluate the amount of avidin added to a solution of biotinylated magnetic NP labels, which caused clustering of the particles and, consequently, a change in the measured relaxation time. Studies of the Brownian magnetization relaxation curves measured homogeneously for magnetic particle labels have also been performed by Enpuku et al., who showed detection of Candida albicans fungi. Specifically, they applied a sandwich-type assay making use of biotinylated antibodies to target the fungi and avidin-coated magnetic NPs that bind to the biotinylated antibodies. Contrary to the short relaxation times of unbound magnetic markers of about 0.4 ms, particles immobilized onto fungi show an increased relaxation time of about 24,000 ms due to the size of the fungi. The same group established similar bioassays based on analyte-induced binding of magnetic NPs to polymer microbeads. By using this measurement technique, the biotin model analyte has been detected down to a concentration of 0.95 fM. Another possible application of the magnetorelaxometry technique is the quantification of the uptake of magnetic NPs by cells. MRX measurements have also been performed by applying fluxgate magnetometers, which offer the possibility of using miniaturized measurement instruments applicable to point-of-care settings. This is hardly possible for SQUID magnetometers, which require extensive hardware for cooling. Ludwig et al. applied fluxgate magnetometers to study the relaxation behavior of magnetic particle labels. The same group improved the detection technique by making use of two fluxgates in a differential configuration, which allows measurements to be performed in a magnetically unshielded environment. A homogeneous MRX bioassay making use of fluxgate magnetometers was demonstrated for the study of binding reactions of streptavidin-functionalized magnetic NP labels to biotinylated agarose microbeads and to biotinylated bovine serum albumin (BSA) protein. 2.2.2. Dynamic Agitation by Linear AC Magnetic Fields The biosensing principles discussed in this section are based on analyzing the frequency dependence of the sample's magnetic susceptibility. For sufficiently small amplitudes, the induced magnetization of a sample fluid containing magnetic particle labels is a linear function of the magnetizing field strength, and it can be characterized by the complex magnetic susceptibility χ = χ′ − iχ″. The dependence of the susceptibility on the frequency of the external field is given by: (7) χ(ω) = χ_0 / (1 + iωτ_eff), with the angular frequency ω, the susceptibility in an external DC magnetic field χ_0, and the effective relaxation time τ_eff. The real (Equation (8)) and imaginary (Equation (9)) parts of the susceptibility follow as: (8) χ′(ω) = χ_0 / (1 + (ωτ_eff)^2) and (9) χ″(ω) = χ_0 ωτ_eff / (1 + (ωτ_eff)^2). These formulas indicate that the real part of the magnetic susceptibility decreases with increasing frequency, while the imaginary part shows a maximum at ωτ_eff = 1, which allows one to deduce the Brownian relaxation time and, consequently, the hydrodynamic particle volume (see Equations (4) and (6)).
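The Debye-type expressions in Equations (7)–(9) can be evaluated directly to see how an increase in hydrodynamic volume shifts the imaginary part of the susceptibility to lower frequencies. The following sketch uses two illustrative effective relaxation times, roughly standing in for unbound and clustered labels; the χ_0 value and the relaxation times are assumptions for demonstration only.

```python
import numpy as np

def ac_susceptibility(freq_hz, tau_eff, chi_0=1.0):
    """Debye model of Equations (7)-(9): returns (chi_real, chi_imag)."""
    wt = 2 * np.pi * freq_hz * tau_eff
    chi_real = chi_0 / (1 + wt**2)
    chi_imag = chi_0 * wt / (1 + wt**2)
    return chi_real, chi_imag

tau_unbound = 2.5e-5   # s, e.g. a ~40 nm hydrodynamic diameter label (illustrative)
tau_bound   = 2.0e-4   # s, larger after analyte-induced clustering (illustrative)
freqs = np.logspace(1, 6, 300)

for tau in (tau_unbound, tau_bound):
    _, chi_imag = ac_susceptibility(freqs, tau)
    f_peak = freqs[np.argmax(chi_imag)]
    print(f"tau = {tau:.1e} s -> chi'' peak near {f_peak:.0f} Hz (expected {1/(2*np.pi*tau):.0f} Hz)")
```

The peak of χ″ sits at f = 1/(2πτ_eff), so binding-induced growth of the labels moves the peak toward lower frequencies.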
Plots of both parts of the magnetic susceptibility χ are shown against ωτ_eff, with ω = 2πf and the magnetic excitation field frequency f. An alternative representation of the AC susceptibility measurement signal is given by the phase lag φ, which is a measure of the phase between the exciting external magnetic field and the magnetization of the sample and can be expressed by the real and the imaginary part of the AC susceptibility according to: (10) φ = arctan(χ″/χ′). Different measurement approaches have been employed in the literature to detect changes of the hydrodynamic particle properties by dynamic linear AC magnetic field excitation, which will be discussed in the following subsections. Frequency Sweep AC Susceptibility Measurements These types of methods are based on measuring the impedance of an induction coil into which a vial containing the sample fluid with added magnetic particle labels is placed. To that end, the magnetic particles are excited by an external linear AC magnetic field of variable frequency generated by the induction coil. While the coil's inductance depends on the real part of the sample's susceptibility, the coil's resistance is directly related to the imaginary part of the sample's susceptibility. By analyzing frequency sweeps of the applied AC magnetic field, changes of the measured complex susceptibility of the sample can be directly related to the hydrodynamic volume of the magnetic particle labels and, thus, to the binding of analyte molecules. Some applications of this method to biosensing have been presented by Astalan et al. and by Chung et al. The former group verified the sensor principle by detecting prostate-specific antigen (PSA) in buffer solutions employing magnetite NP labels functionalized with specific antibodies. The latter group demonstrated binding reactions of biotinylated S-protein to avidin-functionalized magnetite NP labels. AC susceptibility measurements have also been shown by employing cobalt NPs dispersed in water as labels. Fornara et al. presented the synthesis of single-core magnetite NPs with optimized performance for AC susceptibility measurements. Following functionalization of the magnetic NPs, the authors could show detection of specific antibodies in untreated serum samples of cows infected by Brucella bacteria, with a limit of detection of about 0.05 µg/mL of Brucella antibodies. Further AC susceptibility biosensing-related studies include the analysis of binding reactions of avidin-coated iron oxide magnetic NP labels with biotin-coated polymer microbeads to gain information on the size distributions of the magnetic particle labels, the signal's dependence on the concentration of the applied polymer particles, and the effect of different incubation times. The AC susceptibility measurement method has also been applied to the detection of DNA by the so-called "volume-amplified magnetic nanobead detection assay". It has been shown that specific DNA strands can be detected following rolling circle amplification steps. This technique has been extended to multiplexed detection of DNA sequences and was also adopted into portable measurement instruments. It has been applied to detect Bacillus globigii spores and bacterial DNA originating from Vibrio cholerae and Escherichia coli (E. coli).
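Because χ″/χ′ = ωτ_eff in the Debye model, Equation (10) reduces to φ = arctan(ωτ_eff), so a single-frequency phase-lag read-out also reports on the hydrodynamic volume. In the short sketch below, the drive frequency and the relaxation times are illustrative assumptions, not values from the cited experiments.

```python
import math

def phase_lag_deg(freq_hz, tau_eff):
    """Phase lag between excitation field and sample magnetization,
    Equation (10) combined with the Debye expressions: phi = arctan(omega * tau_eff)."""
    return math.degrees(math.atan(2 * math.pi * freq_hz * tau_eff))

f_drive = 1000.0                       # single read-out frequency, Hz (illustrative)
print(phase_lag_deg(f_drive, 2.5e-5))  # unbound labels   -> roughly 9 degrees
print(phase_lag_deg(f_drive, 2.0e-4))  # clustered labels -> roughly 51 degrees
```

The increase of the phase lag at a fixed drive frequency is the quantity exploited by the phase-lag measurement mode discussed later in this section.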
In addition, AC susceptibility biosensing can be carried out in a multiplexed format by making use of the distinct spectral positions of the imaginary part of the complex susceptibility of differently sized magnetic particle labels, thus enabling simultaneous detection of different types of analyte molecules within the same sample solution. Öisjöen et al. studied the increase of the hydrodynamic volume of magnetic NP labels upon analyte molecule binding by analyzing AC susceptibility frequency sweeps in combination with magnetorelaxation measurements, i.e., monitoring of the decay of the sample's magnetization after an aligning magnetic field is switched off. While data fits to AC susceptibility spectra reveal the actual size distribution of the applied magnetic particle labels, the magnetorelaxation data allow observation of the real-time kinetics of binding events. Both measurements are performed with a SQUID magnetometer, and a LoD of 10 µg/mL of the applied streptavidin-coated multi-core CoFe2O4 magnetic particle labels was obtained. As model analyte, the authors employed PSA-targeting biotinylated antibodies and demonstrated a LoD of 0.7 nM.

Mixed-Frequency AC Susceptibility Measurements

A change in the dynamics of magnetic particle labels upon an increase in hydrodynamic volume can also be measured by the magnetic susceptibility reduction method. Here, the magnetic susceptibility of the sample reduces upon analyte molecule binding due to the growing hydrodynamic volume or clustering of the magnetic particle labels. The immunomagnetic reduction (IMR) method is based on detecting this reduced susceptibility by applying a mixed-frequency read-out technique. To that end, the magnetic particle labels are excited by two linear AC magnetic fields of different frequency, which are generated by two distinct excitation coils (see a for a schematic measurement setup). The measurement signal is the sample's magnetization, which is detected by a pick-up coil or, for higher sensitivity, by a SQUID magnetometer. The excitation frequencies are chosen high enough that only single magnetic particle labels can follow the field, while clusters of magnetic particle labels are not affected. Therefore, the measured susceptibility originates from single particles only. The reduction in measurement signal can be directly related to the amount of bound analyte molecules. Applying an excitation mode with two different frequencies f1 and f2 allows the magnetic susceptibility χAC to be detected not only at the excitation frequencies but also at mixed frequencies of the form mf1 + nf2 with integer m and n. This leads to an improved signal-to-background ratio, as the single excitation frequencies are effectively suppressed from the measurement signal. Bioassay measurements based on this method have been reported by Hong et al., who showed detection of CRP in serum samples. The same group further developed the IMR technique by employing a SQUID-based measurement setup for more sensitive detection of magnetic particle labels, and they achieved a CRP limit of detection of 10⁻⁶ mg/L, which represents an improvement in sensitivity of five orders of magnitude compared to their previous publication. The group has also shown that the dependence of the detected signal on the analyte molecule concentration follows a logistic function (see b), which is discussed in more detail in a separate publication.
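The mixed-frequency read-out can be illustrated with a small numerical sketch: a Langevin-type (nonlinear) label magnetization driven by two tones produces combination frequencies such as f1 + 2f2, and the amplitude there drops when part of the label population is bound into clusters that can no longer follow the excitation. The frequencies, drive amplitudes, the choice of f1 + 2f2 as read-out component, and the assumption that clustered labels contribute no signal are all simplifying assumptions made for this sketch only.

```python
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with the linear limit x/3 for small arguments."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    safe = np.where(small, 1.0, x)                 # placeholder to avoid division by zero
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

FS = 50_000                        # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / FS)    # 1 s record gives 1 Hz FFT resolution
F1, F2 = 400.0, 37.0               # assumed excitation frequencies (Hz)
H1, H2 = 1.0, 1.0                  # drive amplitudes in units of k_B*T per particle moment (assumed)

def sample_signal(responsive_fraction):
    """Magnetization of the label fraction still free to follow the field (clusters assumed frozen)."""
    h = H1 * np.sin(2 * np.pi * F1 * t) + H2 * np.sin(2 * np.pi * F2 * t)
    return responsive_fraction * langevin(h)

def amplitude_at(signal, f):
    """Amplitude of the spectral component closest to frequency f."""
    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / FS)
    return spectrum[np.argmin(np.abs(freqs - f))]

f_mix = F1 + 2 * F2                                      # one possible combination frequency (assumed here)
chi_without = amplitude_at(sample_signal(1.0), f_mix)    # no analyte: all labels respond
chi_with = amplitude_at(sample_signal(0.8), f_mix)       # assumed 20% of labels bound into clusters
print(f"signal at f1+2*f2 = {f_mix:.0f} Hz reduced by {(chi_without - chi_with)/chi_without*100:.1f} %")
```

In this simplified picture the relative reduction at the combination frequency directly mirrors the bound fraction, which is the quantity that IMR relates to the analyte concentration.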
The logistic function is commonly applied as a valuable tool for the interpretation of IMR measurement results. Recent publications have demonstrated the feasibility of the IMR method for the detection of different proteins in clinically relevant settings. Examples include the detection of CRP in buffer and in human serum samples, or the detection of insulin-like growth factor binding protein-1 (IGFBP-1) in cervicovaginal secretions of pregnant women for the diagnosis of preterm premature rupture of membranes. Molecular diagnosis of cancer by detecting protein biomarkers in serum samples has been reported for the des-γ-carboxyprothrombin protein in rat serum, and it was shown that the concentration of the protein biomarker correlates with the tumor size in hepatocellular carcinoma. Furthermore, the concentration of α-fetoprotein (AFP) was evaluated in human serum samples of both healthy individuals and patients with liver tumors. Finally, the vascular endothelial growth factor protein has been employed as analyte molecule in human serum for the distinction of healthy individuals from tumor patients with colorectal or hepatocellular cancer. Specific proteins such as β-amyloid-40 (Aβ-40), Aβ-42 and the tau protein serve as the most prominent biomarkers for research on Alzheimer's disease and mild cognitive impairment. IMR measurements of these proteins in buffer solutions have been reported as a first proof-of-principle, and their detection has also been demonstrated in human plasma samples. In addition to proteins, IMR has also been applied to the sensing of small molecules such as hormones, as reported by Chen et al. for the detection of the β-subunit of human chorionic gonadotropin in urine samples of pregnant women. Furthermore, a general proof of the successful detection of DNA by IMR measurements can be found in the publication of Yang et al. Moreover, IMR has been reported for virus bioassays. Examples include the detection of two types of orchid viruses by magnetic NP labels functionalized by an antibody to target the virus particles, the detection of the avian virus H5N1, and swine influenza A viruses. Finally, the IMR measurement technique has been employed in the field of veterinary research and for food control. Specifically, an assay for detecting shrimp white spot disease caused by white spot syndrome virus has been developed, and the detection of antibiotics in shrimp has been achieved by direct binding of the chloramphenicol drug to antibodies on the particle label surface. Additionally, an IMR assay has been developed for sensing of the nervous necrosis virus extracted from aquaculture groupers.

Phase Lag AC Susceptibility Measurements

An alternative approach for analyzing the dynamics of magnetic particle labels is to examine the phase lag between the AC magnetic excitation field and the magnetization of the sample fluid (see Equation (10)), which allows the signal of interest to be detected at a single frequency. Liao et al. introduced this measurement mode employing dextran-coated superparamagnetic Fe3O4 particles with core diameters of 12 nm as magnetic particle labels. For bioassay measurements, the applied particles were functionalized by antibodies targeting the CRP protein, and particle clustering was induced by the CRP analyte. Particle clustering affects the total effective relaxation time and, thus, the AC susceptibility and the measured phase lag. Liao et al.
demonstrated CRP detection down to approximately 40 nM in buffer solution. The same group also examined detection of AFP in buffer solution and obtained a LoD of about 1 nM. Excitation and detection are experimentally realized by a corresponding coil arrangement, and a lock-in amplifier is employed for the phase lag determination. Here, the observed phase lag differences upon analyte addition with respect to samples without analyte molecules reach about 0.3–2°, while the absolute phase lags amount to about 3°. Tu et al. developed a measurement mode which combines the mixed-frequency detection technique discussed above with observation of the phase lag between the magnetization of the sample and the external magnetic field. Specifically, the magnetic particle labels are simultaneously excited by two linear magnetic fields of different frequency, and the signal to be detected is the phase lag of the resulting sample magnetization with respect to the excitation fields. In their experiments, one frequency is kept fixed, while the other frequency is scanned, and the phase lag is recorded as a function of the variable frequency.

2.2.3. Dynamic Agitation by Rotating Magnetic Fields

Instead of applying linear AC magnetic fields, actuation of the magnetic particle labels can also be achieved by applying rotating magnetic fields. It has been shown that rotating magnetic field actuation leads to higher signal values compared to linear AC magnetic field actuation. As described in the previous section, the hydrodynamic properties of the particle labels can be represented by the phase lag of the sample magnetization with respect to the applied magnetic field. A schematic illustration of the measurement method is shown in . When the Néel relaxation time of the applied magnetic particle labels is substantially larger than the period of the exciting rotating magnetic field, the magnetic particle moment follows the rotating magnetic field by Brownian rotation. Due to the hydrodynamic drag the particle label experiences within the sample fluid, this rotation is delayed by a steady-state phase lag φ, which rises when the hydrodynamic diameter d_hydro of the particle label increases due to binding of analyte molecules. A first proof-of-principle of magnetic particle label agitation by rotating magnetic fields and magnetic detection by fluxgate magnetometers has been given by Dieckhoff et al. The authors demonstrated detection of the binding of IgG antibodies to magnetic NP labels functionalized by protein G and analyzed the dependence of the measurement signal on the analyte molecule concentration. It has also been reported that the binding kinetics of analyte molecules to the magnetic NP labels can be interpreted according to the law of mass action. Absolute phase lag values of up to 60° and phase lag differences between samples with and without added analyte molecules of up to 20° were observed.

2.3. Nuclear Magnetic Resonance Measurements

Nuclear magnetic resonance (NMR) measurements of water protons in conjunction with magnetic particles can be applied for biosensing of a variety of different analytes, as will be shown in the following. Usually, superparamagnetic NPs are employed to modify the precession of the nuclear spins of water protons in the proximity of the NPs, which in turn alters the measured relaxation times, but the application of paramagnetic particles has also been reported.
Adding superparamagnetic NPs to samples that are measured by NMR leads to the creation of local magnetic dipole fields that cause inhomogeneities of the applied external static magnetic field, which results in differences in the nuclear spin precession between protons close to the NPs and protons of the bulk sample material (dephasing of the proton spins). An important property of superparamagnetic NPs employed for NMR measurements is their relaxivity, which is defined as their capacity to alter the relaxation rate constants, both longitudinal (parallel to the external static magnetic field) and transverse (perpendicular to the external static magnetic field). The relaxivity depends on the single NP size and the concentration of the NP ensemble. The relaxation rate constants are inverse functions of the relaxation times (R = 1/T), so that the relaxivity directly correlates with changes of the relaxation times and, thus, with the signal enhancement achieved by the employed magnetic NPs. The time associated with the transverse relaxivity is denoted as T2, and T1 is associated with the longitudinal relaxivity. As the longitudinal relaxivity is smaller than the transverse relaxivity for commonly employed magnetic NPs, measurements of the latter are usually employed for biosensing. This way, lower concentrations of magnetic NPs need to be applied, which increases the assay's sensitivity and lowers the amount of required reagents. If the NPs are functionalized to bind to specific target molecules, two distinct measurement modes can be applied for biosensing, as described below.

In the first measurement mode, the biomarkers of interest are labeled by the magnetic NPs, and the excess of unbound NPs is removed. The remaining NPs induce changes of the sample's relaxation times due to the added magnetic field inhomogeneities, which are proportional to the number of residual magnetic NPs. This measurement mode is used for detecting larger targets like cells and bacteria, which can easily be separated mechanically from unbound free NPs. In those cases, the magnetic NPs bind to biomarkers on the cell surface.

The second measurement mode relies on clustering of the magnetic NPs due to cross-linking by analyte molecules that specifically bind to the functional groups immobilized onto the NP surfaces. The fundamental effect on which this measurement approach is based is the difference in the T2 relaxation time between single-dispersed NPs and agglomerated NPs. Applications of this method include the sensing of small molecules (e.g., drugs), oligonucleotides and proteins. By using enzymes, competitive binding processes, or changes of the pH value and of the temperature, the assay can also be performed backwards, i.e., starting from particle agglomerates and ending at single-dispersed particles. This dual-direction biosensing capability is termed magnetic relaxation switching (MRSw), which describes changes of the organizational state (single-dispersed vs. agglomerated) of the magnetic NPs in solution. The principle of the MRSw measurement method is sketched in . The formation of magnetic NP agglomerates results in a decrease of the measured relaxation time, and vice versa if particle agglomerates are dispersed into single NPs. The observation of reduced relaxation times upon magnetic NP agglomeration can be explained by the outer-sphere theory. General comprehensive summaries of the outer-sphere theory can be found in , while a more detailed description is given in the .
Briefly summarized, the relaxivity is directly proportional to the geometric cross section of the NP. Additionally, a particle cluster consisting of single NPs can be treated as equivalent to an enlarged single NP, which has been shown to hold regardless of the cluster's fractal dimension. Thus, the formation of a NP cluster can be described by a single NP of increasing size, which means that upon NP agglomeration the relaxivity increases and the measured relaxation time decreases. Here, the effective cross section of a NP agglomerate is larger than the sum of those of the contributing single NPs up to a certain limit of agglomerate size (>100 nm diameter). The relaxivity increases with agglomerate size up to a plateau, which is then followed by a decrease. The decrease in relaxivity can be explained qualitatively by the increasing distance between NP agglomerates, so that fewer water protons are affected by the generated magnetic field inhomogeneities; this is related to the limited translational diffusion of water molecules during the time scale of an MRSw experiment (fewer protons diffuse into the inhomogeneous regions of the static magnetic field within the duration of an experiment). A detailed introduction to, and also an extension of, the outer-sphere theory is given in . Furthermore, a set of mathematical equations that allows the behavior of MRSw experiments to be modeled and assay sensitivities and dynamic ranges to be calculated has been published by Min et al.

A wide range of different applications of NMR measurements making use of superparamagnetic NPs can be found in the literature and is already partly listed in . The following paragraphs give an introduction to this broad area of potential applications. Josephson, Perez and Weissleder were the first to discover the biosensing potential of NMR measurements assisted by superparamagnetic NPs. They employed oligonucleotide-functionalized NPs, which were cross-linked by complementary oligonucleotide strands to induce NP clustering, thus leading to a decrease of the observed transverse relaxation time. The backward direction of the MRSw sensing approach was first demonstrated by Perez et al., who showed that the transverse relaxation time increases when NPs connected by double-stranded DNA are separated from each other by applying DNA-cleaving agents. NMR measurements have also been used to detect polymerase chain reaction (PCR) products, which has been applied for the diagnosis of tuberculosis. The first experimental results on the detection of protein-protein interactions, obtained by applying green fluorescent protein antibody-functionalized NPs to detect the corresponding proteins, were presented by Perez et al., who in the same publication also presented results on enzyme activity sensing achieved by reversing the MRSw assay direction (enzymatic cleaving of the NP binding to yield single-dispersed NPs in solution). Additionally, several enzymes have been tested by applying the MRSw sensing principle. For example, avidin-functionalized NPs can be cross-linked by a bi-biotinylated peptide, which can subsequently be cleaved by a protease enzyme to generate a change in the measured relaxation time. Other examples are lysozymes, which have been tested in human serum samples with a LoD in the lower nanomolar regime, and measurements of telomerase activity by employing different telomerase inhibitors.
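Independent of which analyte induces the switching, the MRSw readout ultimately amounts to extracting T2 from a transverse-magnetization decay and comparing it between the dispersed and the agglomerated state. The following minimal Python sketch does this for synthetic decay curves with invented T2 values and noise levels; it is not modeled on any specific instrument or assay from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def echo_decay(times, t2, noise=0.01):
    """Synthetic transverse-magnetization decay M(t) = exp(-t/T2) with additive noise."""
    return np.exp(-times / t2) + noise * rng.standard_normal(len(times))

def fit_t2(times, signal):
    """Recover T2 from a mono-exponential decay by a log-linear least-squares fit."""
    slope, _ = np.polyfit(times, np.log(np.clip(signal, 1e-6, None)), 1)
    return -1.0 / slope

echo_times = np.linspace(0.002, 0.2, 50)            # echo times in seconds (assumed)
states = {"dispersed": 0.080, "clustered": 0.045}   # invented T2 values (s) before/after switching

for state, t2_true in states.items():
    t2_fit = fit_t2(echo_times, echo_decay(echo_times, t2_true))
    print(f"{state:10s}: fitted T2 = {t2_fit*1e3:5.1f} ms (true {t2_true*1e3:.0f} ms)")
```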
Measurements of the T2 relaxation time by nuclear magnetic resonance have also been applied to determining dissociation constants between proteins and associated ligands. Larger targets have also been examined, e.g., viral particles of the herpes simplex virus and the adenovirus, S. enterica bacteria in milk samples, or cancer cells, which have been detected and profiled by MRSw sensing. At the other end of the size scale, very small molecules have also been detected in various sample solutions. For example, the hormone-like bisphenol A molecule has been tested in drinking water with a LoD of 400 pg/mL, enantiomeric impurities in solutions of the amino acid phenylalanine have been examined, and the salbutamol drug has been measured in swine urine samples. The identification of inhibitors of toxins released by the anthrax bacterium by measurements of the T2 relaxation time has also been reported. In a suitable measurement setting, MRSw can also be applied to detect ions in solution, as shown by Atanasijevic et al., who detected calcium ions by exploiting calcium-dependent protein-protein interactions to induce magnetic NP agglomeration. Further developments of the measurement principle concern the miniaturization of the experimental setup and the development of implantable MRSw systems, which have so far been tested for the detection of both cancer and cardiac biomarkers.
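To connect the outer-sphere picture summarized earlier with the size of the expected MRSw effect, the sketch below assumes that the relaxation-rate contribution of the labels grows roughly with the square of the effective cluster diameter up to a cutoff, beyond which it is taken to level off. The scaling exponent, the cutoff, and all rate constants are rough assumptions chosen only to illustrate how clustering shortens the measured T2, not values from the cited literature.

```python
import numpy as np

R2_MATRIX = 0.4      # background relaxation rate of the sample matrix (1/s), assumed
R2_SINGLE = 20.0     # rate contribution of fully dispersed labels at the working dose (1/s), assumed
D_SINGLE = 50e-9     # hydrodynamic diameter of a single label (m), assumed
D_CUTOFF = 100e-9    # cluster size above which the assumed d^2 scaling is taken to level off (m)

def label_rate(d_cluster):
    """Illustrative outer-sphere trend: R2 contribution grows ~d^2, then saturates above the cutoff."""
    d_eff = min(d_cluster, D_CUTOFF)
    return R2_SINGLE * (d_eff / D_SINGLE) ** 2

for d in (50e-9, 70e-9, 100e-9, 150e-9):
    t2 = 1.0 / (R2_MATRIX + label_rate(d))
    print(f"effective cluster size {d*1e9:5.0f} nm -> T2 ~ {t2*1e3:6.1f} ms")
```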
This measurement principle resulted in the development of the so called magnetic relaxation immunoassay (MARIA), which is comparable to ELISA, but makes use of magnetic NPs as labels instead of applying an enzymatic reaction for signal generation . Here, Lange et al. showed detection of human immunoglobulin G (IgG) protein by employing a low T c -SQUID instrument and by measuring Néel relaxation. Furthermore, the remanence of the bound magnetic NPs has been measured to deduce the analyte concentration as shown by Kötitz et al. . Here, they applied functionalized magnetic NPs, which are bound to a flat surface via the analyte molecule and measured the magnetization of the immobilized magnetic NPs to evaluate the analyte concentration. For the measurement of the magnetic remanence, again a static magnetic field is applied and the magnetization is observed over time with the difference that the sample is removed during the measurement period so that the resulting change in the measured magnetization signal can be related to the remanence of the fixed magnetic NPs. Unbound magnetic NPs are already relaxed in their orientation by Brownian relaxation. Magnetorelaxation measurements (MRX, magnetorelaxometry) have also been applied to homogeneous biosensing. For example, Kötitz et al. performed magnetic induction measurements of the entire sample volume to record the relaxation signal of biotinylated iron oxide NP labels . In this case, the signal is a mixture of both, Néel and Brownian relaxation, but as the Néel relaxation time of the applied particle labels is substantially larger, the Brownian mechanism dominates. Thus, measured changes in the relaxation time can be associated to changes in the hydrodynamic particle volumes (see Equation (4)), which are induced by addition of avidin model analyte, thus inducing particle label clustering . Eberbeck et al. applied homogenous magnetorelaxometry to study unspecific binding reaction kinetics of magnetic NPs onto latex micro beads and onto yeast cells . This has later been expanded to specific binding reactions of the biotin-streptavidin model system by coupling streptavidin and anti-biotin antibody functionalized magnetic NPs to biotinylated micro agarose beads . Yang et al. also employed the biotin-avidin model system to evaluate the amount of avidin added to a solution of biotinylated magnetic NP labels, which caused clustering of the particles and, consequently, a change in the measured relaxation time . Studies of the Brownian magnetization relaxation curves measured homogeneously for magnetic particle labels has also been performed by Enpuku et al. , who showed detection of Candida albicans fungi . Specifically, they applied a sandwich-type assay making use of biotinylated antibodies to target the fungi and avidin-coated magnetic NPs that bind to the biotinylated antibodies . Contrary to the short relaxation times of unbound magnetic markers of about 0.4 ms, particles immobilized onto fungi show an increased relaxation time of about 24,000 ms due to the fungi size . The same group established similar bioassays based on analyte molecule induced binding of magnetic NPs to polymer microbeads . By using this measurement technique, biotin model analyte has been detected down to a concentration of 0.95 fM . Another possible application of the magnetorelaxometry technique is the quantification of the uptake of magnetic NPs by cells . 
MRX measurements have also been performed by applying fluxgate magnetometers, which offer the possibility to make use of miniaturized measurement instruments applicable to point-of-care settings. This is hardly possible for SQUID magnetometers, which require extensive hardware for cooling. Ludwig et al. applied fluxgate magnetometers to study the relaxation behavior of magnetic particle labels . The same group improved the detection technique by making use of two fluxgates in a differential configuration, which allows to perform measurements in a magnetically unshielded environment . A homogeneous MRX bioassay making use of fluxgate magnetometers was demonstrated for the study of binding reactions of streptavidin functionalized magnetic NP labels to biotinylated agarose microbeads and to biotinylated bovine serum albumin (BSA) protein . 2.2.2. Dynamic Agitation by Linear AC Magnetic Fields The biosensing principles discussed in this section are based on analyzing the frequency dependence of the sample’s magnetic susceptibility. For sufficiently small amplitudes, the induced magnetization of a sample fluid containing magnetic particle labels is a linear function of the magnetizing field strength, and it can be characterized by the complex magnetic susceptibility χ = χ’ − iχ’’. The dependence of the susceptibility on the frequency of the external field is given by: (7) χ ( ω ) = χ 0 1 + i ω τ e f f with the angular frequency ω , the corresponding susceptibility in an external DC magnetic field χ 0 , and the effective relaxation time τ eff . The real (Equation (8)) and the imaginary (Equation (9)) parts of the susceptibility follow as : (8) χ ′ ( ω ) = χ 0 1 + ( ω τ e f f ) 2 (9) χ ″ ( ω ) = χ 0 ω τ e f f 1 + ( ω τ e f f ) 2 These formulas indicate that the real part of the magnetic susceptibility decreases with increasing frequency, while the imaginary part shows a maximum at ωτ eff = 1, which allows to deduce the Brownian relaxation time and, consequently, the hydrodynamic particle volume (see Equations (4) and (6)) . Plots of both parts of the magnetic susceptibility χ are shown in against ωτ eff with ω = 2πf and the magnetic excitation field frequency f . An alternative representation of the AC susceptibility measurement signal is given by the phase lag ϕ , which is a measure of the phase between the exciting external magnetic field and the magnetization of the sample and can be expressed by the real and the imaginary part of the AC susceptibility according to : (10) φ = a r c t a n ( χ ″ χ ′ ) Different measurement approaches have been employed in the literature to detect changes of the hydrodynamic particle properties by dynamic linear AC magnetic field excitation, which will be discussed in the following subsections. Frequency Sweep AC Susceptibility Measurements These types of methods are based on measuring the impedance of an induction coil into which a vial containing the sample fluid with added magnetic particle labels is placed . To that end, the magnetic particles are excited by an external linear AC magnetic field of variable frequency generated by the induction coil . While the coil’s inductance depends on the real part of the sample’s susceptibility, the coil’s resistance is directly related to the imaginary susceptibility part of the sample . 
By analyzing frequency sweeps of the applied AC magnetic field, changes of the measured complex susceptibility of the sample can be directly related to the hydrodynamic volume of the magnetic particle labels and, thus, to the binding of analyte molecules. Some applications of this method to biosensing have been presented by Astalan et al. and by Chung et al. The former group verified the sensor principle by detecting prostate specific antigen (PSA) in buffer solutions employing magnetite NPs labels functionalized by specific antibodies . The latter group demonstrated binding reactions of biotinylated S-protein to avidin functionalized magnetite NP labels . AC susceptibility measurements have also been shown by employing cobalt NPs dispersed in water as labels . Fornara et al. presented the synthesis of magnetite single core NPs with optimized performance for AC susceptibility measurements . Following functionalization of the magnetic NPs, the authors could show detection of specific antibodies in untreated serum samples of cows infected by Brucella bacteria with a limit of detection of about 0.05 µg/mL of Brucella antibodies . Further AC susceptibility biosensing-related studies include the analysis of binding reactions of avidin-coated iron oxide magnetic NP labels with biotin-coated polymer microbeads to gain information on size distributions of the magnetic particle labels, the signal’s dependence on the concentration of the applied polymer particles and the effect of different incubation times . The AC susceptibility measurement method has also been applied to the detection of DNA by the so called “volume-amplified magnetic nanobead detection assay”. It has been shown that specific DNA strands can be detected following rolling circle amplification steps . This technique has been extended to multiplexed detection of DNA sequences and was also adopted into portable measurement instruments . It has been applied to detect Bacillus globigii spores and bacterial DNA originating from Vibrio cholerae and Escherichia coli ( E. coli ) . In addition, AC susceptibility biosensing can also be carried out in a multiplexed format by making use of the distinct spectral positions of the imaginary part of the complex susceptibility of differently sized magnetic particle labels, thus enabling simultaneous detection of different types of analyte molecules within the same sample solution . Öisjöen et al. studied the increase of the hydrodynamic volume of magnetic NP labels upon analyte molecule binding by analyzing AC susceptibility frequency sweeps in combination with magnetorelaxation measurements, i.e. , monitoring of the decay of the sample’s magnetization after an aligning magnetic field is switched off . While data fits to AC susceptibility spectra reveal the actual size distribution of the applied magnetic particle labels, the magnetorelaxation data allows to observe real-time kinetics of binding events. Both measurements are done by making use of a SQUID magnetometer, and a LoD of 10 µg/mL of the applied streptavidin-coated multi-core CoFe 2 O 4 magnetic particle labels was obtained . As model analyte, the authors employed PSA targeting biotinylated antibodies and demonstrated a LoD of 0.7 nM . Mixed-Frequency AC Susceptibility Measurements A change in the dynamics of magnetic particle labels upon an increase in hydrodynamic volume can also be measured by the magnetic susceptibility reduction method. 
Here, the magnetic susceptibility of the sample reduces upon analyte molecule binding due to the growing hydrodynamic volume or clustering of the magnetic particle labels. The immunomagnetic reduction (IMR) method is based on detecting this reduced susceptibility by applying a mixed-frequency read-out technique . To that end, the magnetic particle labels are excited by two linear AC magnetic fields of different frequency, which are generated by two distinct excitation coils (see a for a schematic measurement setup) . The measurement signal is the sample’s magnetization, which is detected by a pick-up coil or, for higher sensitivity, by a SQUID magnetometer . The excitation frequencies are chosen high enough, so that only single magnetic particle labels can follow, while clusters of magnetic particle labels are not affected. Therefore, the measured susceptibility originates from single particles only . The reduction in measurement signal can be directly related to the amount of bound analyte molecules . Applying an excitation mode with two different frequencies f 1 and f 2 allows to detect the magnetic susceptibility χ AC not only at the excitation frequencies but also at mixed frequencies of the form mf 1 + nf 2 with integers for m and n . This leads to an improved signal-to-background ratio as the single excitation frequencies are effectively suppressed from the measurement signal . Bioassay measurement based on this method have been reported by Hong et al. , who showed detection of CRP in serum samples . The same group further developed the IMR technique by employing a SQUID-based measurement setup for more sensitive detection of magnetic particle labels, and they achieved a CRP limit of detection of 10 −6 mg/L, which presents an improvement in sensitivity of five orders of magnitude compared to their previous publication . The group has also shown that the dependence of the detected signal on the analyte molecule concentration follows a logistic function (see b) , which is discussed in more detail in a separate publication . The logistic function is commonly applied as a valuable tool for the interpretation of IMR measurement results. Recent publications have demonstrated the feasibility of the IMR method for the detection of different proteins in clinically relevant settings. Here, examples include the detection of CRP in buffer and in human serum samples or the detection of the insulin-like growth factor binding protein-1 (IGF-1) in cervicovaginal secretions of pregnant women for the diagnosis of preterm premature rupture of membranes . Molecular diagnosis of cancer by detecting protein biomarkers in serum samples has been reported for the des-γ-carboxyprothrombin protein in rat serum, and it was shown that the concentration of the protein biomarker correlates with the tumor size in hepatocellular carcinoma . Furthermore, the concentration of the α-fetoprotein (AFP) was evaluated in human serum samples of both healthy individuals and patients with liver tumors . Finally, the vascular endothelial growth factor protein has been employed as analyte molecule in human serum for the distinction of healthy individuals and tumor patients with colorectal or hepatocellular cancer . Specific proteins like β-amyloid-40 (Aβ-40), Aβ-42 and the tau-protein serve as the most prominent biomarkers for research on Alzheimer’s disease and mild cognitive impairment. 
IMR measurements of these proteins in buffer solutions to give a first proof-of-principle have been reported and previously the detection has been shown in human plasma samples . In addition to proteins, IMR has also been applied for the sensing of small molecules like hormones, as it has been reported by Chen et al. for the detection of the β-subunit of human chorionic gonadotropin in urine samples of pregnant women . Furthermore, a general proof for the successful detection of DNA by IMR measurements can be found in the publication of Yang et al. . Moreover, IMR has been reported for virus bioassays as well. Examples include the detection of two types of orchid viruses by magnetic NP labels functionalized by an antibody to target the virus particles , the detection of the avian virus H5N1 , and swine influenza A viruses . Finally, the IMR measurement technique has been employed in the field of veterinary research and for food control. Specifically, an assay for detecting shrimp white spot disease caused by white spot syndrome virus has been developed, and the detection of antibiotics in shrimp has been achieved by direct binding of the chloramphenicol drug to antibodies on the particle label surface . Additionally, an IMR assay has been developed for sensing of the nervous necrosis virus extracted from aquaculture groupers . Phase Lag AC Susceptibility Measurements An alternative approach for analyzing the dynamics of magnetic particle labels is to examine the phase lag between the AC magnetic excitation field and the magnetization of the sample fluid (see Equation (10)), which allows detecting the signal of interest at a single frequency. Liao et al. introduced this measurement mode employing dextran-coated superparamagnetic Fe 3 O 4 particles with core diameters of 12 nm as magnetic particle labels . For bioassay measurements, the applied particles were functionalized by antibodies targeting the CRP protein, and particle clustering was induced by CRP analyte . Particle clustering affects the total effective relaxation time and, thus, the AC susceptibility and the measured phase lag . Liao et al. demonstrated CRP detection down to approximately 40 nM in buffer solution . The same group also examined detection of AFP in buffer solution and obtained a LoD of about 1 nM . Excitation and detection is experimentally realized by a respective coil arrangement, and a Lock-In amplifier is employed for the phase lag determination . Here, the observed phase lag differences upon analyte addition with respect to samples without analyte molecules reach about 0.3–2°, while the absolute phase lags amount to about 3° . Tu et al. developed a measurement mode which combines the mixed-frequency detection technique as discussed above with observations of the phase lag between the magnetization of the sample and the external magnetic field . Specifically, the magnetic particle labels are simultaneously excited by two linear magnetic fields of different frequency, and the signal to be detected is the phase lag of the resulting sample magnetization with respect to the excitation fields. In their experiments, one frequency is kept fixed, while the other frequency is scanned, and the phase lag is recorded in dependence of the variable frequency . 2.2.3. Dynamic Agitation by Rotating Magnetic Fields Instead of applying linear AC magnetic fields, actuation of the magnetic particle labels can also be achieved by applying rotating magnetic fields. 
It has been shown that rotating magnetic field actuation leads to higher signal values compared to linear AC magnetic field actuation . As described in the previous section, the hydrodynamic properties of the particle labels can be represented by the phase lag of the sample magnetization to the applied magnetic field. A schematic illustration of the measurement method is shown in . When the Néel relaxation time of the applied magnetic particle labels is substantially larger than the period of the exciting rotating magnetic field, the magnetic particle moment follows the rotating magnetic field by Brownian rotation. Due to the hydrodynamic drag the particle label experiences within the sample fluid, this rotation is delayed by a steady-state phase lag ϕ , which rises when the hydrodynamic diameter d hydro of the particle label increases due to binding of analyte molecules. A first proof-of-principle of magnetic particle label agitation by rotating magnetic fields and magnetic detection by fluxgate magnetometers has been given by Dieckhoff et al. . The authors demonstrated detection of binding processes of IgG antibodies to magnetic NP labels functionalized by protein G and analyzed the dependence of the measurement signal on the analyte molecule concentration . It has also been reported that the binding kinetics of analyte molecules to the magnetic NP labels can be interpreted according to the law of mass . Absolute phase lag values of up to 60° and phase lag differences between samples with and without added analyte molecules of up to 20° were observed .
In an external magnetic field, the magnetic moments of particle labels dispersed in the sample solution experience a magnetic torque, resulting in a net sample magnetization in field direction governed by the Langevin equation . Once the external field is switched off, the magnetic torque vanishes, and the sample’s net magnetization relaxes back to zero. At the particle label scale, two different relaxation processes can be distinguished, which are the Néel relaxation and Brownian relaxation. Néel relaxation describes an internal decay of the magnetic moment of the particle labels, while Brownian relaxation designates thermal rotational diffusion of the particle labels. Both processes can be described by characteristic relaxation times. The Brownian relaxation time τ B is defined by: (2) τ B = ψ 2 k B T with temperature T , the Boltzmann constant k B and a rotational drag coefficient ψ . Here, the latter in case of a spherical particle is given by: (3) ψ = 6 η V h with the dynamic viscosity of the sample fluid η and the hydrodynamic NP volume V h . Thus, for spherical particles the Brownian relaxation time τ B can be written as: (4) τ B = 3 η V h k B T The dependence of the Brownian relaxation time on the hydrodynamic particle volume paves the way for homogeneous biosensing applications, as particle clustering or binding of analyte molecules induces changes in the relaxation times. This is schematically sketched in a, which shows the Brownian relaxation time plotted against the magnetic core diameter of a spherical NP. The NPs applied for calculating the relaxation times comprise a magnetic core with a magnetic anisotropy energy density K = 20 KJ/m 3 (corresponding to magnetite Fe 3 O 4 ) and a hydrodynamic shell around the magnetic core of thickness t (indicated in a by the grey area) . Water at room temperature is assumed as the sample medium. Obviously, large changes in particle volume resulting from particle clustering induce substantial changes in the relaxation time that can be easily detected, while smaller changes due to analyte molecule binding onto the particle surface have to be measured at a shorter time scale. Moreover, it can be seen that an increase in hydrodynamic shell thickness induces changes of the Brownian relaxation time for small particle core diameters only. Thus, for measurements of changes of the hydrodynamic shell thickness upon binding of analyte molecules, small initial nanoprobes have to be applied, while methods based on detecting particle agglomeration are less sensitive on the initial nanoprobe size. The Néel relaxation time τ N is also called inverse flipping frequency of the magnetization and can be written as: (5) τ N = τ 0 e x p ( K V m k B T ) where τ 0 is usually in the range of 10 −9 s, K is the particle’s magnetic anisotropy energy density, and V m denotes the particle’s magnetic volume . Both relaxation times can be combined to an effective relaxation time τ eff of the form : (6) τ e f f = τ B τ N τ B + τ N b shows both individual relaxation times and the effective relaxation time against the magnetic particle core diameter. All parameters employed for the calculation are the same as applied for the calculation of the Brownian relaxation time in a, except that b assumes a hydrodynamic shell around the magnetic core of a constant thickness of 10 nm. Due to the rapid increase of the Néel relaxation time with increasing magnetic volume, the Brownian relaxation mechanism dominates for larger core diameters. 
Relaxation time determination has first been applied for biosensing in a heterogeneous sandwich-type assay format making use of magnetic NP labels that bind specifically to a solid surface via bound analyte molecules . For the chosen NPs, the Brownian relaxation time is substantially smaller than the response time of the superconducting quantum interference device (SQUID) instrumentation applied to detect the sample magnetization. As the magnetic moment of surface-bound magnetic NP labels can only relax by the slow Néel mechanism once the external magnetizing field is switched off, only bound NP labels contribute to the signal, while the magnetization of non-bound NP labels remaining freely in the sample solution has already decayed via the fast Brownian relaxation mechanism. This measurement principle resulted in the development of the so called magnetic relaxation immunoassay (MARIA), which is comparable to ELISA, but makes use of magnetic NPs as labels instead of applying an enzymatic reaction for signal generation . Here, Lange et al. showed detection of human immunoglobulin G (IgG) protein by employing a low T c -SQUID instrument and by measuring Néel relaxation. Furthermore, the remanence of the bound magnetic NPs has been measured to deduce the analyte concentration as shown by Kötitz et al. . Here, they applied functionalized magnetic NPs, which are bound to a flat surface via the analyte molecule and measured the magnetization of the immobilized magnetic NPs to evaluate the analyte concentration. For the measurement of the magnetic remanence, again a static magnetic field is applied and the magnetization is observed over time with the difference that the sample is removed during the measurement period so that the resulting change in the measured magnetization signal can be related to the remanence of the fixed magnetic NPs. Unbound magnetic NPs are already relaxed in their orientation by Brownian relaxation. Magnetorelaxation measurements (MRX, magnetorelaxometry) have also been applied to homogeneous biosensing. For example, Kötitz et al. performed magnetic induction measurements of the entire sample volume to record the relaxation signal of biotinylated iron oxide NP labels . In this case, the signal is a mixture of both, Néel and Brownian relaxation, but as the Néel relaxation time of the applied particle labels is substantially larger, the Brownian mechanism dominates. Thus, measured changes in the relaxation time can be associated to changes in the hydrodynamic particle volumes (see Equation (4)), which are induced by addition of avidin model analyte, thus inducing particle label clustering . Eberbeck et al. applied homogenous magnetorelaxometry to study unspecific binding reaction kinetics of magnetic NPs onto latex micro beads and onto yeast cells . This has later been expanded to specific binding reactions of the biotin-streptavidin model system by coupling streptavidin and anti-biotin antibody functionalized magnetic NPs to biotinylated micro agarose beads . Yang et al. also employed the biotin-avidin model system to evaluate the amount of avidin added to a solution of biotinylated magnetic NP labels, which caused clustering of the particles and, consequently, a change in the measured relaxation time . Studies of the Brownian magnetization relaxation curves measured homogeneously for magnetic particle labels has also been performed by Enpuku et al. , who showed detection of Candida albicans fungi . 
Specifically, they applied a sandwich-type assay making use of biotinylated antibodies to target the fungi and avidin-coated magnetic NPs that bind to the biotinylated antibodies . Contrary to the short relaxation times of unbound magnetic markers of about 0.4 ms, particles immobilized onto fungi show an increased relaxation time of about 24,000 ms due to the fungi size . The same group established similar bioassays based on analyte molecule induced binding of magnetic NPs to polymer microbeads . By using this measurement technique, biotin model analyte has been detected down to a concentration of 0.95 fM . Another possible application of the magnetorelaxometry technique is the quantification of the uptake of magnetic NPs by cells . MRX measurements have also been performed by applying fluxgate magnetometers, which offer the possibility to make use of miniaturized measurement instruments applicable to point-of-care settings. This is hardly possible for SQUID magnetometers, which require extensive hardware for cooling. Ludwig et al. applied fluxgate magnetometers to study the relaxation behavior of magnetic particle labels . The same group improved the detection technique by making use of two fluxgates in a differential configuration, which allows to perform measurements in a magnetically unshielded environment . A homogeneous MRX bioassay making use of fluxgate magnetometers was demonstrated for the study of binding reactions of streptavidin functionalized magnetic NP labels to biotinylated agarose microbeads and to biotinylated bovine serum albumin (BSA) protein .
The biosensing principles discussed in this section are based on analyzing the frequency dependence of the sample’s magnetic susceptibility. For sufficiently small amplitudes, the induced magnetization of a sample fluid containing magnetic particle labels is a linear function of the magnetizing field strength, and it can be characterized by the complex magnetic susceptibility χ = χ’ − iχ’’. The dependence of the susceptibility on the frequency of the external field is given by: (7) χ ( ω ) = χ 0 1 + i ω τ e f f with the angular frequency ω , the corresponding susceptibility in an external DC magnetic field χ 0 , and the effective relaxation time τ eff . The real (Equation (8)) and the imaginary (Equation (9)) parts of the susceptibility follow as : (8) χ ′ ( ω ) = χ 0 1 + ( ω τ e f f ) 2 (9) χ ″ ( ω ) = χ 0 ω τ e f f 1 + ( ω τ e f f ) 2 These formulas indicate that the real part of the magnetic susceptibility decreases with increasing frequency, while the imaginary part shows a maximum at ωτ eff = 1, which allows to deduce the Brownian relaxation time and, consequently, the hydrodynamic particle volume (see Equations (4) and (6)) . Plots of both parts of the magnetic susceptibility χ are shown in against ωτ eff with ω = 2πf and the magnetic excitation field frequency f . An alternative representation of the AC susceptibility measurement signal is given by the phase lag ϕ , which is a measure of the phase between the exciting external magnetic field and the magnetization of the sample and can be expressed by the real and the imaginary part of the AC susceptibility according to : (10) φ = a r c t a n ( χ ″ χ ′ ) Different measurement approaches have been employed in the literature to detect changes of the hydrodynamic particle properties by dynamic linear AC magnetic field excitation, which will be discussed in the following subsections. Frequency Sweep AC Susceptibility Measurements These types of methods are based on measuring the impedance of an induction coil into which a vial containing the sample fluid with added magnetic particle labels is placed . To that end, the magnetic particles are excited by an external linear AC magnetic field of variable frequency generated by the induction coil . While the coil’s inductance depends on the real part of the sample’s susceptibility, the coil’s resistance is directly related to the imaginary susceptibility part of the sample . By analyzing frequency sweeps of the applied AC magnetic field, changes of the measured complex susceptibility of the sample can be directly related to the hydrodynamic volume of the magnetic particle labels and, thus, to the binding of analyte molecules. Some applications of this method to biosensing have been presented by Astalan et al. and by Chung et al. The former group verified the sensor principle by detecting prostate specific antigen (PSA) in buffer solutions employing magnetite NPs labels functionalized by specific antibodies . The latter group demonstrated binding reactions of biotinylated S-protein to avidin functionalized magnetite NP labels . AC susceptibility measurements have also been shown by employing cobalt NPs dispersed in water as labels . Fornara et al. presented the synthesis of magnetite single core NPs with optimized performance for AC susceptibility measurements . 
Following functionalization of the magnetic NPs, the authors could show detection of specific antibodies in untreated serum samples of cows infected by Brucella bacteria with a limit of detection of about 0.05 µg/mL of Brucella antibodies . Further AC susceptibility biosensing-related studies include the analysis of binding reactions of avidin-coated iron oxide magnetic NP labels with biotin-coated polymer microbeads to gain information on size distributions of the magnetic particle labels, the signal’s dependence on the concentration of the applied polymer particles and the effect of different incubation times . The AC susceptibility measurement method has also been applied to the detection of DNA by the so called “volume-amplified magnetic nanobead detection assay”. It has been shown that specific DNA strands can be detected following rolling circle amplification steps . This technique has been extended to multiplexed detection of DNA sequences and was also adopted into portable measurement instruments . It has been applied to detect Bacillus globigii spores and bacterial DNA originating from Vibrio cholerae and Escherichia coli ( E. coli ) . In addition, AC susceptibility biosensing can also be carried out in a multiplexed format by making use of the distinct spectral positions of the imaginary part of the complex susceptibility of differently sized magnetic particle labels, thus enabling simultaneous detection of different types of analyte molecules within the same sample solution . Öisjöen et al. studied the increase of the hydrodynamic volume of magnetic NP labels upon analyte molecule binding by analyzing AC susceptibility frequency sweeps in combination with magnetorelaxation measurements, i.e. , monitoring of the decay of the sample’s magnetization after an aligning magnetic field is switched off . While data fits to AC susceptibility spectra reveal the actual size distribution of the applied magnetic particle labels, the magnetorelaxation data allows to observe real-time kinetics of binding events. Both measurements are done by making use of a SQUID magnetometer, and a LoD of 10 µg/mL of the applied streptavidin-coated multi-core CoFe 2 O 4 magnetic particle labels was obtained . As model analyte, the authors employed PSA targeting biotinylated antibodies and demonstrated a LoD of 0.7 nM . Mixed-Frequency AC Susceptibility Measurements A change in the dynamics of magnetic particle labels upon an increase in hydrodynamic volume can also be measured by the magnetic susceptibility reduction method. Here, the magnetic susceptibility of the sample reduces upon analyte molecule binding due to the growing hydrodynamic volume or clustering of the magnetic particle labels. The immunomagnetic reduction (IMR) method is based on detecting this reduced susceptibility by applying a mixed-frequency read-out technique . To that end, the magnetic particle labels are excited by two linear AC magnetic fields of different frequency, which are generated by two distinct excitation coils (see a for a schematic measurement setup) . The measurement signal is the sample’s magnetization, which is detected by a pick-up coil or, for higher sensitivity, by a SQUID magnetometer . The excitation frequencies are chosen high enough, so that only single magnetic particle labels can follow, while clusters of magnetic particle labels are not affected. Therefore, the measured susceptibility originates from single particles only . 
The reduction in measurement signal can be directly related to the amount of bound analyte molecules . Applying an excitation mode with two different frequencies f1 and f2 makes it possible to detect the magnetic susceptibility χ_AC not only at the excitation frequencies but also at mixed frequencies of the form mf1 + nf2 with integers m and n . This leads to an improved signal-to-background ratio, as the single excitation frequencies are effectively suppressed from the measurement signal . Bioassay measurements based on this method have been reported by Hong et al. , who showed detection of CRP in serum samples . The same group further developed the IMR technique by employing a SQUID-based measurement setup for more sensitive detection of magnetic particle labels, and they achieved a CRP limit of detection of 10⁻⁶ mg/L, which represents an improvement in sensitivity of five orders of magnitude compared to their previous publication . The group has also shown that the dependence of the detected signal on the analyte molecule concentration follows a logistic function (see b) , which is discussed in more detail in a separate publication . The logistic function is commonly applied as a valuable tool for the interpretation of IMR measurement results. Recent publications have demonstrated the feasibility of the IMR method for the detection of different proteins in clinically relevant settings. Here, examples include the detection of CRP in buffer and in human serum samples or the detection of the insulin-like growth factor binding protein-1 (IGFBP-1) in cervicovaginal secretions of pregnant women for the diagnosis of preterm premature rupture of membranes . Molecular diagnosis of cancer by detecting protein biomarkers in serum samples has been reported for the des-γ-carboxyprothrombin protein in rat serum, and it was shown that the concentration of the protein biomarker correlates with the tumor size in hepatocellular carcinoma . Furthermore, the concentration of the α-fetoprotein (AFP) was evaluated in human serum samples of both healthy individuals and patients with liver tumors . Finally, the vascular endothelial growth factor protein has been employed as analyte molecule in human serum for the distinction of healthy individuals and tumor patients with colorectal or hepatocellular cancer . Specific proteins like β-amyloid-40 (Aβ-40), Aβ-42 and the tau-protein serve as the most prominent biomarkers for research on Alzheimer's disease and mild cognitive impairment. IMR measurements of these proteins in buffer solutions have been reported as a first proof of principle, and their detection has previously also been shown in human plasma samples . In addition to proteins, IMR has also been applied to the sensing of small molecules like hormones, as reported by Chen et al. for the detection of the β-subunit of human chorionic gonadotropin in urine samples of pregnant women . Furthermore, a general proof of the successful detection of DNA by IMR measurements can be found in the publication of Yang et al. . Moreover, IMR has been reported for virus bioassays as well. Examples include the detection of two types of orchid viruses by magnetic NP labels functionalized by an antibody to target the virus particles , the detection of the avian virus H5N1 , and swine influenza A viruses . Finally, the IMR measurement technique has been employed in the field of veterinary research and for food control.
Specifically, an assay for detecting shrimp white spot disease caused by the white spot syndrome virus has been developed, and the detection of antibiotics in shrimp has been achieved by direct binding of the chloramphenicol drug to antibodies on the particle label surface . Additionally, an IMR assay has been developed for sensing of the nervous necrosis virus extracted from aquaculture groupers .

Phase Lag AC Susceptibility Measurements

An alternative approach for analyzing the dynamics of magnetic particle labels is to examine the phase lag between the AC magnetic excitation field and the magnetization of the sample fluid (see Equation (10)), which allows the signal of interest to be detected at a single frequency. Liao et al. introduced this measurement mode employing dextran-coated superparamagnetic Fe3O4 particles with core diameters of 12 nm as magnetic particle labels . For bioassay measurements, the applied particles were functionalized by antibodies targeting the CRP protein, and particle clustering was induced by the CRP analyte . Particle clustering affects the total effective relaxation time and, thus, the AC susceptibility and the measured phase lag . Liao et al. demonstrated CRP detection down to approximately 40 nM in buffer solution . The same group also examined detection of AFP in buffer solution and obtained a LoD of about 1 nM . Excitation and detection are experimentally realized by a respective coil arrangement, and a lock-in amplifier is employed for the phase lag determination . Here, the observed phase lag differences upon analyte addition with respect to samples without analyte molecules reach about 0.3–2°, while the absolute phase lags amount to about 3° . Tu et al. developed a measurement mode which combines the mixed-frequency detection technique discussed above with observations of the phase lag between the magnetization of the sample and the external magnetic field . Specifically, the magnetic particle labels are simultaneously excited by two linear magnetic fields of different frequency, and the signal to be detected is the phase lag of the resulting sample magnetization with respect to the excitation fields. In their experiments, one frequency is kept fixed, while the other frequency is scanned, and the phase lag is recorded as a function of the variable frequency .
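To make Equations (7)–(10) concrete, the short sketch below (all particle and fluid parameters are illustrative assumptions) shows how an increase of the hydrodynamic diameter upon analyte binding shifts the maximum of χ″ to lower frequency and changes the phase lag read out at a fixed excitation frequency.

```python
import numpy as np

# Minimal numerical sketch of Equations (7)-(10) for Brownian-dominated labels.
# The particle and fluid parameters are illustrative assumptions only.

k_B, T, eta = 1.380649e-23, 300.0, 1.0e-3   # SI units
chi_0 = 1.0                                  # normalized DC susceptibility

def tau_brownian(d_hydro):
    """Brownian relaxation time for a hydrodynamic diameter d_hydro (m)."""
    return 3.0 * eta * (np.pi / 6.0 * d_hydro**3) / (k_B * T)

def chi_parts(f, tau):
    """Real and imaginary parts of the Debye susceptibility, Eqs. (8) and (9)."""
    wt = 2.0 * np.pi * f * tau
    return chi_0 / (1.0 + wt**2), chi_0 * wt / (1.0 + wt**2)

f = np.logspace(1, 6, 2000)                  # frequency sweep: 10 Hz .. 1 MHz
f_readout = 500.0                            # fixed read-out frequency (Hz)

for label, d_h in [("unbound label (100 nm)      ", 100e-9),
                   ("with bound analyte (130 nm) ", 130e-9)]:
    tau = tau_brownian(d_h)
    chi_re, chi_im = chi_parts(f, tau)
    f_peak = f[np.argmax(chi_im)]            # chi'' maximum where 2*pi*f*tau = 1
    chi_re_r, chi_im_r = chi_parts(f_readout, tau)
    phi = np.degrees(np.arctan2(chi_im_r, chi_re_r))            # Eq. (10)
    print(f"{label}: chi'' peak at {f_peak:6.0f} Hz, "
          f"phase lag at {f_readout:.0f} Hz = {phi:4.1f} deg")
```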
Instead of applying linear AC magnetic fields, actuation of the magnetic particle labels can also be achieved by applying rotating magnetic fields. It has been shown that rotating magnetic field actuation leads to higher signal values compared to linear AC magnetic field actuation . As described in the previous section, the hydrodynamic properties of the particle labels can be represented by the phase lag of the sample magnetization relative to the applied magnetic field. A schematic illustration of the measurement method is shown in . When the Néel relaxation time of the applied magnetic particle labels is substantially larger than the period of the exciting rotating magnetic field, the magnetic particle moment follows the rotating magnetic field by Brownian rotation. Due to the hydrodynamic drag the particle label experiences within the sample fluid, this rotation is delayed by a steady-state phase lag ϕ, which rises when the hydrodynamic diameter d_hydro of the particle label increases due to binding of analyte molecules. A first proof-of-principle of magnetic particle label agitation by rotating magnetic fields and magnetic detection by fluxgate magnetometers has been given by Dieckhoff et al. . The authors demonstrated detection of binding processes of IgG antibodies to magnetic NP labels functionalized by protein G and analyzed the dependence of the measurement signal on the analyte molecule concentration . It has also been reported that the binding kinetics of analyte molecules to the magnetic NP labels can be interpreted according to the law of mass action . Absolute phase lag values of up to 60° and phase lag differences between samples with and without added analyte molecules of up to 20° were observed .
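A rough way to see why the steady-state phase lag grows with hydrodynamic size is to balance the magnetic torque against the Stokes rotational drag for a rigid dipole in the synchronous regime, which gives sin ϕ = ω/ω_c with a critical angular frequency ω_c = mB/(6ηV_hydro). The sketch below evaluates this relation for assumed label parameters; the numbers are not taken from the cited experiments.

```python
import numpy as np

# Hedged sketch: steady-state phase lag of a magnetic particle label driven by
# a rotating magnetic field in the synchronous regime, assuming a rigid dipole
# and Stokes rotational drag (sin(phi) = omega/omega_c, omega_c = m*B/(6*eta*V)).
# All parameter values are illustrative assumptions, not values from the cited
# experiments.

eta = 1.0e-3          # fluid viscosity (Pa*s), water-like
B = 2.0e-3            # rotating field amplitude (T), assumed
m = 2.0e-17           # magnetic moment per label (A*m^2), assumed
f_rot = 1000.0        # rotation frequency of the applied field (Hz), assumed
omega = 2.0 * np.pi * f_rot

def phase_lag_deg(d_hydro):
    """Steady-state phase lag in degrees (synchronous regime only)."""
    V_hydro = np.pi / 6.0 * d_hydro**3
    omega_c = m * B / (6.0 * eta * V_hydro)       # critical angular frequency
    if omega >= omega_c:
        raise ValueError("above the critical frequency: asynchronous regime")
    return np.degrees(np.arcsin(omega / omega_c))

for label, d_h in [("bare label, d_hydro = 100 nm     ", 100e-9),
                   ("label with bound analyte, 120 nm ", 120e-9)]:
    print(f"{label}: phase lag = {phase_lag_deg(d_h):.1f} deg")
```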
Nuclear magnetic resonance (NMR) measurements of water protons in conjunction with magnetic particles can be applied for biosensing of a variety of different analytes, as will be shown in the following. Usually, superparamagnetic NPs are employed to modify the precession of the nuclear spins of water protons in the proximity of the NPs, which in turn alters the measured relaxation times , but application of paramagnetic particles has also been reported . Adding superparamagnetic NPs to samples that are measured by NMR leads to the creation of local magnetic dipole fields that cause inhomogeneities of the applied external static magnetic field, which results in differences of nuclear spin precession of protons close to the NPs and protons of the bulk sample material (dephasing of proton spins) . An important property of superparamagnetic NPs employed for NMR measurements is their relaxivity, which is defined as their capacity to alter the relaxation rate constants, both longitudinal (parallel to the external static magnetic field) and transverse (perpendicular to the external static magnetic field) . The relaxivity depends of the single NP size and the concentration of the NP ensemble . The relaxation rate constants are inverse functions of the relaxation times (R = 1/T), so that the relaxivity directly correlates to changes of the relaxation times and, thus, to the signal enhancement achieved by the employed magnetic NPs . The time associated to transverse relaxivity is denoted as T 2 , and T 1 is associated to the longitudinal relaxivity . As the longitudinal relaxivity is smaller than the transverse relaxivity for commonly employed magnetic NPs, measurements of the latter are usually employed for biosensing . This way, lower concentrations of magnetic NPs need to be applied, which increases the assay’s sensitivity and lowers the amount of required reagents . If the NPs are functionalized to bind to specific target molecules, two distinct measurement modes can be applied for biosensing, as described below . In the first measurement mode, the biomarkers of interest are labeled by the magnetic NPs, and the excess of unbound NPs is removed . The remaining NPs induce changes of the sample’s relaxation times due to the added magnetic field inhomogeneities, which are proportional to the number of residual magnetic NPs . This measurement mode is used for detecting larger targets like cells and bacteria, which can easily be separated mechanically from unbound free NPs . In those cases, the magnetic NPs bind to biomarkers on the cell surface . The second measurement mode relies on clustering of the magnetic NPs due to cross-linking by analyte molecules that specifically bind to the functional groups immobilized onto the NP surfaces . A difference of the T 2 relaxation time between single-dispersed NPs and agglomerated NPs is the fundamental effect on which this measurement approach is based . Applications of this method include the sensing of small molecules (e.g., drugs), oligonucleotides and proteins . By using enzymes, competitive binding processes or changes of the pH value and of the temperature, the assay can be performed backwards as well, i.e. , starting from particle agglomerates and ending at single-dispersed particles . This dual-direction biosensing capability is termed magnetic relaxation switching (MRSw), which describes changes of the organizational state (single-dispersed vs. agglomerated) of the magnetic NPs in solution . 
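Because the relaxation rate constants are defined as R = 1/T, the impact of a given particle concentration on the measured T2 can be estimated directly from the transverse relaxivity via R2 = R2,matrix + r2·C. A small sketch with assumed values follows.

```python
# Sketch of how the transverse relaxation rate scales with the concentration of
# superparamagnetic NPs: R2 = R2_matrix + r2 * C, with R = 1/T. The relaxivity
# and the iron concentrations below are assumptions chosen for illustration.

r2 = 100.0          # assumed transverse relaxivity (s^-1 per mM of Fe)
T2_matrix = 2.0     # assumed T2 of the plain sample matrix (s)

def t2_with_particles(c_fe_mM):
    """Transverse relaxation time (s) at an iron concentration c_fe_mM (mM)."""
    R2 = 1.0 / T2_matrix + r2 * c_fe_mM
    return 1.0 / R2

for c in (0.0, 0.01, 0.05, 0.2):
    print(f"[Fe] = {c:5.2f} mM  ->  T2 = {1000.0 * t2_with_particles(c):7.1f} ms")
```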
The principle of the MRSw measurement method is sketched in . The formation of magnetic NP agglomerates results in a decrease of the measured relaxation time, and vice versa if particle agglomerates are dispersed into single NPs. The observation of reduced relaxation times upon magnetic NP agglomeration can be explained by the outer-sphere theory. General comprehensive summaries of the outer-sphere theory can be found in , while a more detailed description is given in the . Briefly summarized, the relaxivity is directly proportional to the geometric cross section of the NP . Additionally, a particle cluster consisting of single NPs can be seen as an equivalent of an enlarged single NP, which has been shown to be true regardless of the cluster’s fractal dimension . Thus, the formation of a NP cluster can be described by a single NP of increasing size, which means that upon NP agglomeration, the relaxivity increases and the measured relaxation time decreases . Here, the effective cross section of a NP agglomerate is larger than the sum of the contributing single NPs up to a certain limit of agglomerate size (>100 nm diameter) . The relaxivity increases with agglomerate size up to a plateau, which is then followed by a decrease . The decrease in relaxivity can be explained qualitatively by the increasing distance between NP agglomerates so that less water protons are affected by the generated magnetic field inhomogeneities, which is related to the limited translational diffusion behavior of water molecules during the time scale of a MRSw experiment (less protons diffuse into the inhomogeneous regions of the static magnetic field within the duration of an experiment) . A detailed introduction and also an extension of the outer-sphere theory is given in . Furthermore, a set of mathematical equations that allow to model the behavior of MRSw experiments and to calculate assay sensitivities and dynamic ranges has been published by Min et al. . A wide range of different applications of NMR measurements making use of superparamagnetic NPs can be found in literature and is already partly listed in . The following paragraphs give an introduction into the broad area of potential applications. Josephson, Perez and Weissleder have been the first ones who discovered the biosensing potential of NMR measurements assisted by superparamagnetic NPs . Here, they employed oligonucleotide functionalized NPs, which were cross-linked by complementary oligonucleotide strands to induce NP clustering, thus leading to a decrease of the observed transversal relaxation time . The backward direction of the MRSw sensing approach has first been demonstrated by Perez et al. , who showed that the transversal relaxation time increases when NPs connected by double stranded DNA are separated from each other by applying DNA-cleaving agents . NMR measurements have also been used to detect polymerase chain reaction (PCR) products , which has been applied for the diagnosis of tuberculosis . The first experimental results on the detection of protein-protein interactions by applying green fluorescent protein antibody functionalized NPs to detect the corresponding proteins have been presented by Perez et al. , who in the same publication also presented results on enzyme activity sensing achieved by reversing the MRSw assay direction (enzymatic cleaving of NP binding to yield single-dispersed NPs in solution) . Additionally, several enzymes have been tested by applying the MRSw sensing principle. 
For example, avidin-functionalized NPs can be cross-linked by applying a bi-biotinylated peptide, which can subsequently be cleaved by the protease enzyme to generate a change in the measured relaxation time . Other examples are lysozymes, which have been tested in human serum samples with a LoD in the lower nanomolar regime , and measurements of the telomerase activity by employing different telomerase inhibitors . Measurements of the T2 relaxation time by nuclear magnetic resonance have also been applied for determining dissociation constants between proteins and associated ligands . Larger targets have also been examined, e.g., viral particles of the herpes simplex virus and the adenovirus , S. enterica bacteria in milk samples or cancer cells that have been detected and profiled by MRSw sensing . At the other end of the size scale, very small molecules have also been detected in various sample solutions. For example, hormone-like bisphenol A molecules have been tested in drinking water with a LoD of 400 pg/mL , enantiomeric impurities in solutions of the amino acid phenylalanine have been examined , and the salbutamol drug has been measured in swine urine samples . Identification of inhibitors for toxins released by the anthrax bacterium by measurements of the T2 relaxation time has also been reported . In a suitable measurement setting, MRSw can also be applied to detect ions in solution, as shown by Atanasijevic et al. , who detected calcium ions by applying calcium-dependent protein-protein interactions to induce magnetic NP agglomeration . Further developments of the measurement principle concern miniaturization of the experimental setup and the development of implantable MRSw systems, which have been tested up to now for the detection of both cancer and cardiac biomarkers .
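In practice, an MRSw read-out often amounts to converting a measured T2 shift into a relaxation-rate change and mapping it onto a calibration curve recorded with known analyte concentrations. The following is a minimal sketch of this step; the baseline T2 and the calibration data are made-up values for illustration only.

```python
import numpy as np

# Minimal MRSw read-out sketch: convert measured T2 values into the change of
# the transverse relaxation rate, Delta_R2 = 1/T2 - 1/T2_baseline, and estimate
# the analyte concentration by interpolating a calibration curve. The baseline
# T2 and the calibration points are made-up numbers for illustration only.

T2_baseline = 0.120                                  # dispersed (unclustered) labels (s)

calib_conc = np.array([0.0, 1.0, 3.0, 10.0, 30.0])   # analyte concentration (nM)
calib_dR2 = np.array([0.0, 0.4, 1.1, 2.6, 4.0])      # corresponding Delta_R2 (s^-1)

def estimate_concentration(T2_measured):
    """Estimate the analyte concentration (nM) from a measured T2 (s)."""
    dR2 = 1.0 / T2_measured - 1.0 / T2_baseline
    return np.interp(dR2, calib_dR2, calib_conc)

for T2 in (0.118, 0.105, 0.085):                     # example measured T2 values (s)
    print(f"T2 = {1000.0 * T2:.0f} ms  ->  estimated analyte ~ "
          f"{estimate_concentration(T2):.1f} nM")
```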
In this section, we review biosensor concepts that rely on magnetic agitation and optical detection of magnetic particles. Optical detection has the advantage that the distance between the sample and the detector usually is not a very crucial parameter (especially when measuring in transmission geometry), whereas the fast decay of the particle's magnetic stray field with distance usually requires close proximity of the detector to the sample for magnetic detection methods, which limits the flexibility in the design of biosensing setups. In addition, by spectral tuning of the optical response of the particle labels, multiplex analyte detection formats can be designed . Optical detection of magnetically induced orientation changes of the particles in the sample solution requires that the particles display some sort of optical anisotropy, which can be the result of either clustering of intrinsically optically isotropic particles (see ), or can follow from an intrinsic optical anisotropy of the particles (see ).

3.1. Detection by Clustering of Intrinsically Optically Isotropic Magnetic Particles

In this section, we discuss biosensing concepts where optical detection of the particles relies on an optical anisotropy that is induced by assembly of initially optically isotropic particles into doublets, chains or clusters. When these particle assemblies are agitated by an applied magnetic field, their optical signal is modulated, which makes it possible to quantify the concentration and the average size of the magnetic particle clusters.

3.1.1. Sandwich Assays on Magnetically Rotated Particle Clusters

In an applied magnetic field, the magnetic moment of individual particle labels aligns in field direction, and the magnetic dipolar interaction between particles can lead to the formation of particle chains along the field lines. In this way, it is possible to conduct standard sandwich immunoassays on the surface of the magnetic particles and read out analyte-concentration-dependent signals directly in the sample solution without requiring washing. The concept was introduced by Anker et al. and is sketched in . The observed fluorescence intensity of fluorophores bound to the surfaces of the magnetic particles can be modulated by varying the orientation of the particle chains by the applied magnetic field, which changes the relative number of visible (dark stars in ) to non-visible (light stars in ) fluorophores. In a demonstration experiment, Anker et al. applied biotin-labeled fluorophores directly to streptavidin-coated magnetic particles (870 nm mean diameter by Bangs Laboratories Inc., Fishers, IN, USA), and could demonstrate the detection of bound fluorophores above the large background of unbound labels by magnetic modulation . A similar demonstration experiment was later carried out by Petkus et al. , who showed detection of fluorophore-labeled cortisol analyte by magnetic particles (1.6 µm mean diameter BioMag particles by Polysciences Inc., Warrington, PA, USA) functionalized by monoclonal cortisol antibodies . They could achieve a cortisol detection limit of 300 pM in buffer solution by magnetically rotating particle clusters and analyzing the modulated fluorescence intensity with a lock-in amplifier . They later extended their analysis to the cardiac protein biomarker myoglobin, and compared immunoassays in both competitive and sandwich (non-competitive) format performed in buffer and serum .
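As an illustration of this modulation read-out by Petkus et al., the following is a minimal sketch of lock-in style demodulation that recovers a small magnetically modulated fluorescence component above a constant background of unbound labels; the signal parameters (modulation frequency, amplitudes, noise level) are assumptions, not values from the cited experiments.

```python
import numpy as np

# Minimal sketch of lock-in style recovery of a magnetically modulated
# fluorescence component above a constant background of unbound labels. All
# signal parameters (frequency, amplitudes, noise) are assumptions and are not
# taken from the cited experiments.

f_mod = 5.0                        # modulation frequency of the chain rotation (Hz)
fs = 1000.0                        # sampling rate of the detector (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs) # 10 s acquisition

rng = np.random.default_rng(1)
background = 100.0                 # constant fluorescence of unbound labels
bound_amplitude = 0.5              # modulated fluorescence of bound labels
trace = (background
         + bound_amplitude * np.cos(2.0 * np.pi * f_mod * t)
         + 2.0 * rng.standard_normal(t.size))        # detector noise

# Dual-phase lock-in demodulation at the modulation frequency:
ref_i = np.cos(2.0 * np.pi * f_mod * t)
ref_q = np.sin(2.0 * np.pi * f_mod * t)
recovered = 2.0 * np.hypot(np.mean(trace * ref_i), np.mean(trace * ref_q))

print(f"recovered modulation amplitude: {recovered:.2f} (true value {bound_amplitude})")
```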
They achieved similar detection limits in buffer and serum of about 2.5 nM for the competitive format, and about 50 pM for the sandwich format . Here, however, the only assay that was performed strictly without any washing step was the competitive format type assay in buffer, while all other experiments included at least one washing step . This is also true for the most recent study by the group, where following further refinement of their image analysis procedure , they demonstrate highly sensitive detection of three cardiac biomarkers (detection limits: myoglobin ~360 aM, heart-type fatty acid binding protein (H-FABP) ~67 fM, troponin I ~42 fM) . The biomarkers are spiked into buffer solutions and are detected by a sandwich immunoassay format performed on magnetically rotated particle clusters . In summary, while immunoassays on magnetically rotated particle clusters can be in principle applied in a strictly homogeneous format , up to now the most sensitive results do involve washing steps , and it has yet to be demonstrated that the method can also be performed directly in unprocessed sample material with sufficient sensitivity and specificity. 3.1.2. Particle Clustering Mediated by Analyte Molecule Binding Another approach of using magnetic particle clustering is to induce binding between magnetic particles in an external magnetic field via bound analyte molecules, thus creating particle doublets, multiplets, chains or clusters that are also retained once an applied magnetic field is removed again. Here, a prerequisite is that the analyte molecule possesses multiple binding sites to receptors immobilized onto the magnetic particles, thus enabling cluster-formation of particles. If this is not the case, as usually encountered for small molecule detection, a competitive assay format can also be chosen where clustering of particles is reduced by analyte interaction (see , for example). In the following, different concepts are presented which are based on optical detection of analyte-specific clustering of magnetic particles that make use of magnetic fields to accelerate cluster formation and/or to induce periodic variations in the optical signal. Detection of Magnetically Accelerated Particle Dimer Formation The most basic realization of particle clustering mediated by analyte molecule binding was introduced by Baudry et al. and is sketched in . It is based on optical density measurements of magnetic particle dispersions functionalized by either polyclonal or two different types of monoclonal antibodies against the target antigen. The antigens are then captured at the surfaces of the particles, and the formation of particle chains in an applied magnetic field accelerates the creation of particle doublets via bound analyte molecules, which are also retained once the magnetic field is switched off again. As particle doublets scatter light differently than two single particles, the concentration of dimers, and, thus, analyte molecules can be quantified by turbidimetric (extinction) measurements . In an initial demonstration experiment, Baudry et al. showed a detection limit of about 1 pM for ovalbumin model analyte in buffer by magnetic particle labels (200 nm diameter by Ademtech SA, Pessac, France) functionalized by polyclonal ovalbumin antibodies with a total cycle time of five minutes, which includes application of a 20 mT strong magnetic field for one minute to accelerate dimer formation . 
From experiments without the magnetic field incubation step, the authors extrapolate that achieving the same density of dimers without magnetic field acceleration would take more than eight hours . Thus, compared to long-established similar immunoassays based on agglomeration of latex particles , the magnetic agitation step makes this simple method both fast and sensitive. Following detailed analysis of the theory of ligand-receptor interaction in chains of magnetic particles and experimental investigations of the kinetics for analyte molecules with different tether lengths and numbers of binding sites , the group also demonstrated the method to be capable of detecting C-reactive protein (CRP) directly from serum samples with a detection limit of about 1 pM and a dynamic range of three orders of magnitude with a total cycle time of one minute . Finally, the group also introduced an advanced measurement method, where the concentration of dimers is no longer determined in a randomized state, but the extinction difference of the dimers for magnetic-field induced alignment parallel and perpendicular to the optical axis is used, which further increases the signal and achievable sensitivity . Here, the trick is to apply the aligning magnetic field pulse at a magnitude sufficient to rotate the particle doublets created by analyte molecule interaction in the field direction, but insufficient to induce re-chaining of particle labels by magnetic dipolar interactions, which would lead to a false unspecific signal . For their chosen experimental conditions, the authors determined a field magnitude of 5 mT as good compromise between particle doublet alignment rate and prevention of re-chaining . Magnetically Rotated Particle Chain Detection A similar measurement method as described by Baudry et al. has been introduced by Park et al. , but instead of following a multi-stage protocol, they carry out a one-step procedure that comprises continuous application of a rotating magnetic field (RMF) . Here, the RMF induces formation of magnetic particle chains that follow the applied field rotation, which also leads to modulation of the transmitted light intensity (see a) . The length of the particle chains is limited by the balance between the hydrodynamic force due to the viscosity of the solution and the total strength of the attractive force between the particles. For particles with bound analyte molecules, their binding strength adds to the attractive magnetic dipolar interaction force between particles, thus leading to an increasing average particle chain length with analyte molecule concentration (see b) . As the modulation intensity of the transmitted light also depends on the average length of the rotating particle chains, the amplitude of the transmitted light intensity is a measure of the analyte concentration in the sample solution . Applying biotinylated magnetic particles with a mean diameter of 250 nm, Park et al. demonstrated this method for direct one-step detection of the model analyte avidin with a detection limit of about 100 pM within a measurement time of less than 30 s . While detection of actual biomarkers in real samples still needs to be demonstrated, this method represents a fast and simple homogeneous analysis of biomarkers. Scattering Detection of Particle Cluster Magnetorotation The principal multi-step measurement procedure introduced by Baudry et al. 
, which comprises incubation of the samples with functionalized magnetic particle labels, acceleration of particle clustering via bound analyte molecules by inducing chain formation in an applied magnetic field and optical detection of the formed particle clusters, has been refined with regard to the final detection step by Ranzoni et al. . While the quantity of interest, which is the concentration of particle clusters, is measured above a large background signal of non-agglomerated particles by the extinction measurements performed by Baudry et al. , Ranzoni et al. introduced a method specific to particle clusters based on scattering measurements in a rotating magnetic field (RMF) . shows a sketch of their measurement setup, where the optical path is along the z-axis, the RMF is applied in the xz-plane, and the scattered light is picked up at an angle of ~30° from the z-axis. Due to their characteristic magnetic and optical anisotropy, particle doublets rotate with the applied magnetic field and induce a modulation of the scattered light intensity at twice the frequency of the RMF, while the contribution of single particles to the optical signal modulation is negligible . The measurement signal represents the magnitude of the 2nd harmonic of the optical scattering intensity as analyzed by fast Fourier transformation (FFT). When the frequency of the RMF is increased, the particle doublets first follow the RMF synchronously with increasing phase lag up to a critical frequency, which is defined by equal magnetic and drag torques, while at higher frequencies, alternating forward and backward rotations of the doublets occur . By analyzing the resulting frequency dependence of the particle magnetorotation, Ranzoni et al. demonstrated direct quantification of the concentrations of particle doublets as well as the average values and variations of the magnetic susceptibilities of magnetic particles (particles with mean diameters of 300 nm and 500 nm by Ademtech SA, Pessac, France) . To demonstrate the applicability of their method for homogeneous biosensing, they carried out detection of spiked biotinylated bovine serum albumin (BSA) model analyte by streptavidin functionalized particle labels and showed a detection limit of about 400 fM in buffer and 5 pM in plasma . By optimizing the molecular surface architecture of the magnetic label antibody functionalization, the group could also demonstrate detection of the cancer biomarker prostate-specific antigen (PSA) directly in blood plasma, achieving hereby a detection limit of about 500 fM for a total assay time of 14 min (160 fM in buffer) . In the analysis of the measured analyte dose-response curves, the authors observed two plateaus, which, by modeling of the dependence of the optical signal on the degree of cluster formation, they could attribute to a low analyte concentration regime where only particle singlets and doublets exists, and a higher analyte concentration regime where particle multiplets are also formed . Detection of Bead Assembly Magnetorotation Instead of adjusting the experimental parameters to a regime where mainly formation of particle doublets occurs, another approach is to analyze larger particle clusters. To that end, Kinnunen et al. realized a biosensor based on measuring the magnetorotation of magnetic particles that assemble into a cluster at the bottom of a hanging droplet (see a) . 
The droplet is illuminated by a laser or LED light source from above, and the particle cluster is observed from below either by an inverted microscope or by a photodetector . Here, the droplet also serves as a lens to magnify the shadow image of the particle cluster 100-fold . The particle cluster is rotated in the image plane by an applied RMF, and the frequency of the RMF is chosen well above the critical frequency of the particle cluster . The critical frequency is defined as the maximum rotation frequency at which a magnetic particle (or particle cluster) can still follow the applied RMF synchronously ( i.e. , equality of magnetic and hydrodynamic drag torque) . Above this critical frequency, the particle (or particle cluster) experiences an asynchronous motion, and the superimposed net rotation rate in the direction of the applied RMF decreases with increasing RMF frequency . By performing a FFT of the optical signal, the net rotation rate of the particle cluster is determined, and changes in the particle cluster assembly (e.g., cluster expansion or volume increase) or the local fluid viscosity alter the net rotation rate of the particle cluster (see b) . By employing this measurement principle and magnetic particles (2.8 µm diameter by Invitrogen, Waltham, MA, USA) functionalized by E. coli antibodies, Kinnunen et al. performed E. coli bacteria growth studies, including determination of the minimum inhibitory concentration of the two antibiotics streptomycin and gentamicin . Here, bacteria growth on the particle cluster caused an increase of the cluster volume, thus leading to an increase of its rotational period . In the following, the group expanded their analysis to the blood coagulation factor thrombin by observing clusters of magnetic particles (1 µm diameter by Invitrogen, Waltham, MA, USA) functionalized by two different thrombin-specificaptamers . The main effect of thrombin target protein binding to the particles was an expansion of the gaps between the particles, thus leading to larger cluster volumes and increased rotational periods . The authors also determined the dependence of the fractal dimension of the particle clusters on the thrombin concentration by optical microscopy, which showed a good agreement to the magnetorotation period analysis . In buffer, the authors demonstrated a thrombin detection limit as low as 80 fM , which, however, increases to about 7.5 nM in serum (see SI of ), which the authors mainly attribute to the low specificity of the aptamer receptors . Lately, the group also presented a prototype version of their measurement principle, which no longer requires a microscope or hanging droplets, but is realized on three stacked 384-well plates and enables 48-plex detection . The middle plate contains the sample and the particle cluster, while the top and bottom plate incorporate the optics (LED light sources and photodiode detectors, respectively) . The authors demonstrated detection of E. coli bacteria (LoD 5000 cfu/mL) within a total analysis time of about 90 min and also determined the minimum inhibitory concentration of the antibiotic gentamicin . Optomagnetic Detection Incorporating Blu-ray Optics A highly integrated optomagnetic device for measuring the response of magnetic particle clusters to an applied magnetic field that makes use of Blu-ray optical components and a microfluidic disk has lately been introduced by Donolato et al. . 
displays a sketch of the most recent version of the employed setup, where the magnetic particle labels within the detection chamber are excited by a linear AC magnetic field generated by electromagnets placed above and below the microfluidic disk . The dynamic response of the particle labels to the AC magnetic field is determined optically by transmission measurements of light emitted from a Blu-Ray laser diode and picked up by a photodetector . The measurement signal is given by the 2nd harmonic of the photodetector signal, which is usually recorded as a function of the frequency of the applied AC magnetic field (2nd harmonic spectrum) . As larger magnetic clusters are formed by analyte-induced binding, the hydrodynamic drag of the clusters increases, resulting in an altered magnitude and frequency of the peak in the 2nd harmonic spectrum . As an initial proof-of-concept of the method, Donolato et al. demonstrated DNA-based detection of E. coli bacteria following isothermal rolling circle amplification (RCA), employing magnetic particles (100 nm diameter by Micromod, Rostock, Germany) functionalized by oligonucleotide detection probes that bind to the DNA coils produced by the RCA, and demonstrated a detection limit of about 10 pM of DNA coils in buffer solution . In the following, the group evaluated different sensing geometries, and found out that a configuration with perpendicular alignment of the AC magnetic field to the optical axis and parallel alignment of the linear polarization direction of the incident light to the AC magnetic field gives the largest signal, which, in addition to the already previously introduced E. coli bacteria detection via RCA products , they demonstrated for the detection of biotinylated BSA model analyte by streptavidin-functionalized magnetic particle labels (obtained detection limit in buffer ~100 pM) . By adding an incubation step in a sufficiently strong static magnetic field to accelerate particle clustering via bound analyte molecules prior to data acquisition (see permanent magnets in ) and digesting the DNA coil RCA products into monomers, the group demonstrated simultaneous detection of three different bacteria causing urinary tract infection ( E. coli , Proteus mirabilis and Pseudomonas aeruginosa ) . In addition, they showed identification of E. coli bacteria from 28 urine samples with 100% specificity compared to standard clinical laboratory plate culture data . The group also adapted their method to a competitive assay format for the detection of the small molecule adenosine triphosphate (ATP), showing a detection limit of about of 74 µM in buffer and a dynamic range of ~0.1–10 mM, which conforms well to the clinically relevant ATP concentration range . Next, the group showed direct detection of Salmonella bacteria by a competitive assay incorporating two types of magnetic particles, i.e. , large capture particles (5 µm diameter by Micromod, Rostock, Germany) and small detection particles (100 nm diameter by Micromod, Rostock, Germany) . Following a sedimentation step of the large capture particles, the concentration of the remaining detection particles is measured, which due to the competitive assay format scales with the concentration of bacteria, resulting in a detection limit of about 80,000 cfu/mL in buffer . 
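The 2nd harmonic read-out underlying these optomagnetic measurements can be illustrated with a short numerical sketch: the transmitted intensity is modulated at twice the excitation frequency (the optical response is identical for field directions +B and −B), and its amplitude is extracted from the digitized detector trace by Fourier analysis. All signal parameters below are assumptions for illustration only.

```python
import numpy as np

# Sketch of a 2nd harmonic read-out as used in such optomagnetic measurements:
# the transmitted intensity is modulated at twice the frequency of the applied
# AC field, and the amplitude of this 2f component is extracted from the
# digitized detector trace by Fourier analysis. The modulation depth, leakage
# and noise level below are assumptions for illustration only.

f_drive = 97.0                     # AC magnetic field frequency (Hz), assumed
fs = 50_000                        # photodetector sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)  # 1 s acquisition

rng = np.random.default_rng(0)
trace = (1.0
         + 0.004 * np.sin(2.0 * np.pi * 2.0 * f_drive * t)  # 2f signal of interest
         + 0.001 * np.sin(2.0 * np.pi * f_drive * t)        # residual 1f leakage
         + 0.002 * rng.standard_normal(t.size))             # white detector noise

def harmonic_amplitude(x, f0):
    """Single-sided amplitude of the spectral component closest to f0 (Hz)."""
    spec = np.fft.rfft(x) / x.size
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return 2.0 * np.abs(spec[np.argmin(np.abs(freqs - f0))])

print(f"V(2f) = {harmonic_amplitude(trace, 2.0 * f_drive):.4f} (expected ~0.004)")
```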
The latest application demonstrated by the group concerns quantification of the dengue fever protein biomarker NS1 by magnetic particle labels (170 nm diameter by Merck, Darmstadt, Germany) functionalized by two different monoclonal NS1 antibodies, resulting in a detection limit of 25 ng/mL (corresponds to ~500 pM at a NS1 molecular weight of 46–55 kDa ) measured directly in spiked serum samples . Naked Eye Detection of Particle Clusters The easiest way to optically sense the formation of particle clusters in an applied magnetic field, is, of course, by naked-eye detection. This detection modality has been introduced by Leslie et al. , who applied a rotating magnetic field (RMF) to magnetic particles dispersed in a microfluidic well to detect DNA via particle cluster formation, which is quantified by digital image analysis . shows a sketch of the group’s latest setup , which in addition to the RMF also incorporates agitation of the particles by a vortexer (‘dual-force’ ) to enhance the homogeneity of cluster formation across multiple neighboring wells (12 wells demonstrated), but also to speed up the required incubation time and to enhance the detection limit . The images in show the distribution of magnetic particles following the agitated incubation for a control without analyte DNA (−) and a sample with analyte DNA , the presence of which induces agglomeration of particle labels visible to the naked eye . In their initial work using RMF agitation only, the authors demonstrated total DNA concentration detection by aggregation of magnetic particles (8 µm diameter magnetic silica particles by Promega, Madison, WI, USA) for direct white blood cell count from human whole blood samples . This ‘chaotrope-driven aggregation’ (CDA, ) is caused by unspecific adsorption of DNA onto particles with silica surface driven by DNA dehydration, which is induced by the addition of chaotropic salts . Furthermore, the authors could also achieve detection of specific DNA sequences (synthetic 26-base target) by ‘hybridization induced aggregation’ (HIA, ) recognition of magnetic particles (1 µm diameter Dynabeads by Invitrogen, Waltham, MA, USA) functionalized by two different oligonucleotides complementary to the 5′ and 3′ end of the target sequence . Later, still making use of the RMF-only setup, the group extended their total DNA concentration CDA analysis to microbial growth testing ( E. coli detection) as well as differentiation of CD4+ T-Cells, the latter achieved by adding an immunomagnetic separation step up-front . Following introduction of the dual-force setup , the group systematically analyzed the influence of different target sequence parameters on the HIA efficiency of the target to oligonucleotide-functionalized magnetic particles, also including differentiation of one, two and three base mismatches . The latter analysis was further advanced for detecting single nucleotide polymorphism mutation of the KRAS gene from pancreatic and lung cancer cell lines by the dual-force setup, demonstrating efficient HIA discrimination of mutant and wild-type KRAS genes following polymerase chain reaction (PCR) amplification to a minimum number of 10 12 copies . While the CDA approach is intrinsically non-specific, it can also be turned into a specific detection by performing sequence-specific DNA amplification reactions up-front. However, efficient CDA requires DNA lengths of at least 10 kilo-base-pairs (kbp), while the products of amplification reactions are usually much shorter . 
By introducing a competitive assay format, where rising concentrations of the amplification product increasingly inhibit magnetic particle agglomeration that is induced by addition of a fixed concentration of 48 kbp long λ-phage DNA, DuVall et al. demonstrated successful detection of the food-borne pathogens E. coli and Salmonella as well as the Rift Valley fever virus by CDA following loop-mediated isothermal amplification (LAMP) . An even simpler CDA analysis procedure called ‘pipette, aggregate and blot’ (PAB) was introduced by Li et al. . Here, the magnetic particles and the sample are sequentially picked up by a pipette, and the mixture within the pipette tip is exposed to a static magnetic field to induce DNA-mediated formation of aggregates . Next, the fluid is dispensed onto a filter paper (‘blotting’), on which the degree of particle aggregate formation is determined by digital photography and image analysis, i.e. , a process that can also be accomplished by any smart phone . The authors demonstrated detection of human genomic DNA from purified whole blood by the PAB technique and showed that the achievable detection limit depends on the size of the employed magnetic particles (800 ng/mL for 1 µm diameter by Invitrogen, Waltham, MA, USA, and 6.4 µg/mL for 8 µm diameter magnetic silica particles by Promega, Madison, WI, USA) . While this does not reach the detection limit of 250 pg/mL demonstrated for genomic DNA detection by CDA analysis using the dual-force setup , the PAB approach has an advantage with regard to its simplicity. A very similar procedure was followed by Lin et al , who exposed mixtures of magnetic particles (1 µm diameter by Invitrogen, Waltham, MA, USA) and the sample solution to multiple sequences of aggregation (application of a static magnetic field) and re-suspension . Following dispensing of the mixture onto a filter paper, the degree of particle clustering is determined by digital image analysis of the filter paper . The authors demonstrated their method for the detection of the human papilloma virus type 18 gene following rolling circle amplification (RCA), and could successfully distinguish positive samples (genomic DNA isolated from HeLa cells) from negative control samples (genomic DNA isolated from human hepatoma cells) . With the exception of total DNA content determination by CDA (, the CDA part of Reference and the white blood cell analysis part of ), all naked-eye detection papers presented above do not strictly fall into the category of one-step homogeneous detection, as they do involve some sort of upfront sample preparation, i.e. , immunomagnetic separation (CD4+ T-Cell detection part of ), DNA amplification or DNA purification (, HIA part of ). A true one-step analysis procedure comprising analyte-mediated formation of particle clusters in an applied magnetic field has lately been introduced by Chen et al. . shows a schematic representation of the measurement principle employed by the authors, which they designate as ‘immunomagnetic aggregation’ (IMA) . Here, a static magnetic field is applied that attracts the magnetic particles (immunomagnetic beads, IMB) to the side wall of the sample tube, and the structure of the resulting agglomerate depends on the presence of target molecules in the solution . 
The reason is the increased diameter and decreased net magnetization of an IMB-target complex as compared to blank IMBs, which influences the balance between the attractive magnetic force component tangential to the wall and the friction force, thus leading to an expanded arc-shaped aggregation of IMB-target complexes along the tube wall as opposed to a compact stripe-shaped form for blank IMBs (see top view representation in ) . The authors compare their naked-eye IMA detection results to gold lateral flow strip (GLFS) references . In addition, by analysis of digital images taken from the sample tubes, the authors extract an average grey scale value that semi-quantitatively depends on the target molecule concentration and can be used to compare the IMA data with dose-response curves obtained from enzyme-linked immunosorbent assay (ELISA) based reference detection . Employing magnetic beads (200 nm diameter Estapor particles by Merck, Darmstadt, Germany) functionalized by polyclonal E. coli antibodies, the authors demonstrate a detection limit of about 10⁴ cfu/mL for the direct detection of E. coli bacteria in spiked river water samples within 15 min, which is one order of magnitude more sensitive than reference GLFS detection, and about ten times faster than reference ELISA detection . The authors also confirm correct IMA-based identification of E. coli contamination of non-spiked water samples obtained from a livestock farm . In addition to bacteria, the authors also show detection of the cancer biomarker proteins alpha fetoprotein (AFP) and carcinoembryonic antigen (CEA) directly in spiked urine samples using magnetic particles functionalized by pairs of respective monoclonal antibodies, and achieve a detection limit of about 2.5 ng/mL for AFP and 2.0 ng/mL for CEA, both of which are well below the clinical cut-off values . Finally, the authors successfully discriminate AFP and CRP positive from negative patients by IMA-analysis using non-spiked clinical serum samples . 3.2. Detection by Intrinsically Optically Anisotropic Magnetic Labels An alternative to generating optical anisotropy by inducing clustering of intrinsically optically isotropic particles (see ) is to make use of magnetic particle labels that display an intrinsic optical anisotropy. To that end, three main approaches have been followed. One possibility is to make use of magneto-optical effects ( i.e. , the Faraday or the Cotton-Mouton effect) as a source of optical anisotropy, which usually result in changes of the polarization state of the incident light as the optical measurement signal (see ). Alternatively, optical anisotropy can be created by hemi-spherical coating of initially optically isotropic spherical particles (see ) or by employing particle labels with shape anisotropy (e.g., rod-shaped particles, see ). In the latter two cases, the optical measurement signal usually comprises a change in the transmission or scattering intensity of the particle labels. 3.2.1. Magneto-Optical Detection of Magnetic Particle Labels When an external magnetic field is applied to a suspension of magnetic particles, their magnetic moments align parallel to the applied field, and the suspension becomes birefringent and dichroic. As the dichroism induced in magnetic particle suspensions is usually much smaller than the birefringence , it is normally neglected in the analysis. 
Both the Faraday effect (magnetic circular birefringence, magnetic field applied parallel to the direction of light propagation, ) and the Cotton-Mouton effect (magnetic linear birefringence; magnetic field applied perpendicular to the direction of light propagation, ) have been exploited to magneto-optically characterize magnetic particles. Regarding the measurement modes, linearly polarized light is incident onto the sample, and the magnetic field amplitude either varies sinusoidally with time (AC susceptibility mode, ), or is applied as a step function (magnetorelaxation (MRX) mode, ). Magneto-optical methods are sensitive to changes in the Brownian relaxation time of magnetic particle suspensions, and, consequently, have been applied to study hydrodynamic particle diameter distributions or medium viscosities . A typical setup, as it is employed to magneto-optically (Cotton-Mouton effect) measure the relaxation of the magnetization of a particle ensemble after an externally applied uniaxial magnetizing field is turned off (MRX mode), is sketched in a . It comprises a laser light source that is linearly polarized by a polarizer aligned at −45° relative to the orientation of the magnetic field, which is oriented perpendicular to the propagation direction of the light and is generated by a Helmholtz coil. In the center of the Helmholtz coil, the sample containing the particle dispersion within a non-birefringent cuvette is positioned. When the magnetic moments of the particles are aligned by the applied magnetic field, the suspension becomes birefringent, and the transmitted light gets elliptically polarized . The physical origin of the optical anisotropy can be related to crystalline or shape anisotropy of the particle cores, but for the commonly applied iron-oxide NPs mostly arises from surface magnetic anisotropy . After passing the quarter wave plate, which is aligned with its slow axis parallel to the polarizer, the light is again linearly polarized, but shifted in polarization by a birefringence-proportional phase lag . As a result, some light can pass the analyzer, which is oriented at +45° relative to the magnetic field ( i.e. , perpendicular to the polarizer), and, consequently, blocks the incident light if no birefringence is induced in the sample ( i.e. , the particles are randomly oriented) . The transmitted light is measured by a photodiode detector; in this configuration, the detected intensity is proportional to the induced birefringence . b schematically shows the time dependence of the measured light intensity for a setup such as the one described in a. When the magnetic field is turned on, birefringence in the sample is induced, and the measured intensity reaches a stationary value I₀ . When the magnetizing field is turned off, the magnetic particles return to a randomly oriented state. For particles that predominantly relax their net magnetization via Brownian rotational motion, the measured intensity exponentially decays to zero with a time constant given by the Brownian relaxation time of the particles, which is proportional to the cube of their hydrodynamic diameter . Since analyte molecules bound to the particle surfaces increase their hydrodynamic radii, the measured intensity of analyte-carrying particles (red curve) decays more slowly than for plain reference particles (green curve). By fitting the measured intensity by exponential decay curves and integrating across the particle diameter , the hydrodynamic diameter distribution of the particle ensemble can be deduced. 
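To illustrate how such relaxation curves are typically evaluated, the short Python sketch below fits a simulated single-exponential MRX birefringence decay and converts the recovered Brownian relaxation time into a hydrodynamic diameter via τ_B = πηd_h³/(2k_BT); the particle size, noise level and the assumption of a monodisperse sample are illustrative only and are not taken from any of the cited experiments.

```python
import numpy as np
from scipy.optimize import curve_fit

# Brownian relaxation time of a sphere: tau_B = 3*eta*V_h/(k_B*T) = pi*eta*d_h^3/(2*k_B*T)
k_B = 1.380649e-23      # Boltzmann constant (J/K)
T = 293.0               # temperature (K)
eta = 1.0e-3            # viscosity of water (Pa*s)

def tau_brownian(d_h):
    """Brownian relaxation time (s) of a sphere with hydrodynamic diameter d_h (m)."""
    return np.pi * eta * d_h**3 / (2.0 * k_B * T)

def mrx_decay(t, I0, tau):
    """Single-exponential decay of the birefringence signal after field switch-off."""
    return I0 * np.exp(-t / tau)

# --- simulate a noisy relaxation curve for a hypothetical 100 nm monodisperse sample ---
rng = np.random.default_rng(0)
d_true = 100e-9
t = np.linspace(0, 10 * tau_brownian(d_true), 400)
signal = mrx_decay(t, 1.0, tau_brownian(d_true)) + rng.normal(0, 0.01, t.size)

# --- fit the decay and convert the fitted relaxation time back into a diameter ---
(I0_fit, tau_fit), _ = curve_fit(mrx_decay, t, signal, p0=(1.0, 1e-3))
d_fit = (2.0 * k_B * T * tau_fit / (np.pi * eta)) ** (1.0 / 3.0)

print(f"fitted tau_B = {tau_fit*1e3:.2f} ms, hydrodynamic diameter = {d_fit*1e9:.1f} nm")
```

For a real, polydisperse ensemble, the decay would instead be modeled as a superposition of such single-diameter contributions weighted by the size distribution.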
Alternatively, the intensity curve can also be fitted by a stretched exponential, where the size distribution of the particles is described by a polydispersity index . Owing to their high sensitivity to changes in the hydrodynamic shell thickness, magneto-optical methods are well suited as homogeneous particle-based biosensors that can be applied also to studies in dense and highly scattering media, which makes them advantageous compared to other techniques such as dynamic light scattering (DLS). For example, Köber et al. demonstrated in-situ evaluation of the hydrodynamic diameter distribution of magnetite NPs with three different surface coatings (plain PMAO polymer, galactose and PEG) directly within the agarose carrier matrix used for gel electrophoresis, and the obtained diameters have been shown to be independent of fluctuations of the NP concentration along the gel . Stepwise increases in the mean hydrodynamic diameters of carboxylated magnetite NPs on the covalent attachment of avidin, followed by functionalization with biotinylated immunoglobulin G (IgG) antibodies and binding of IgG antigen have been demonstrated by Ku et al. , and they showed that the measured NP diameter increases are well in line with the expected hydrodynamic sizes of the respective molecules . Lartigue et al. carried out magneto-optical characterization of the formation of protein coronas around maghemite NPs for three different NP coatings (carboxylic moieties, glucose and citrate) by incubating them with different concentrations of both BSA and whole blood rat plasma . They showed that the formation of the protein corona depends both on the NP surface coating and on the plasma concentration . Here, the glucose coating efficiently prevents further adhesion of plasma proteins, whereas citrate-coated NPs and NPs with carboxylic moieties first undergo cluster formation at low plasma concentrations (10%–20%), while larger plasma concentrations lead to single particle stabilization with a mean protein corona thickness of 8.8 nm . The largest signal in magneto-optical biosensing can be achieved when the analyte molecule contains multiple binding sites and, consequently, induces cross-linking of the particles. This is demonstrated by Glöckl et al. , who carried out a direct comparison of multicore maghemite NPs functionalized by monoclonal antibodies against PSA and by polyclonal antibodies against IgG . They observed a significant increase in the relaxation time of the NPs only for IgG analyte, which they explained by the analyte-induced formation of NP clusters functionalized by polyclonal antibodies . For the detection of carcinoembryonic antigen (CEA), however, the group obtained cluster formation both for NPs (same type as employed in ) functionalized by monoclonal and polyclonal antibodies, and a detection limit for CEA in buffer in the lower nanomolar regime could be demonstrated . Employing magnetic NPs functionalized by polyclonal antibodies (same type as employed in ), the group also investigated the detection of immunoglobulin M (IgM), IgG, eotaxin, CEA and insulin as well as insulin-like growth factor 1 (IGF-1) , and they could demonstrate a detection limit in the lower nanomolar regime for CEA and IGF-1 and in the picomolar regime for IgG . 
Furthermore, on the basis of a linear chain formation model, the group derived a distribution function of particle clusters, and by fitting the measured intensity curves to this model, they could determine the time evolution of the relative number of monomers, dimers, trimers, etc. . In addition, from the analysis of the time dependence of the measured relaxation curves for different analyte concentrations, the group determined the kinetic parameters for the binding of eotaxin , CEA and IGF-1 to NPs functionalized by respective antibodies, and compared the results to surface plasmon resonance (SPR) data . Similarly, the binding of the lectin concanavalin A (ConA) to carbohydrate-functionalized magnetite NPs was analyzed by Köber et al. . They applied the Hill equation to study the analyte-driven formation of clusters, and directly determined the association and dissociation rate constants by homogeneous magneto-optical measurements by first adding varying concentrations (nanomolar range) of ConA analyte (association), and later adding excess amounts of free carbohydrates (50 mM of mannose or glucose) that practically completely dissociate the analyte from the NPs . The demonstrated detection limit for ConA was in the lower nanomolar range . 3.2.2. Hemispherically Coated Spherical Particle Labels Particles with asymmetric properties are commonly designated as ‘Janus’ particles in reference to the two-faced Roman god Janus, a term that has been promoted by P.G. de Gennes in his Nobel Prize address in 1991 . A number of comprehensive reviews have been published within the past decade that detail the different variants, fabrication strategies and applications of Janus particles . Specifically relevant to this review article are magnetic Janus particles for in vitro diagnostic applications as they have been introduced under the term ‘magnetically modulated optical nanoprobe’ (MagMOON) by the Kopelman group. In its initial realization, Anker et al. employed magnetic microspheres (particles by Spherotech, Lake Forest, IL, USA) that had been coated on one hemisphere with a sputter-deposited gold layer that blocks excitation and detection of fluorophores bound to the non-coated streptavidin-functionalized hemisphere . Consequently, by controlling the alignment of the MagMOONs in the solution by an applied magnetic field, the observed fluorescence intensity can be modulated (see ) . In a demonstration experiment, the authors mixed the MagMOONs with two different biotinylated fluorophores and showed concentration-dependent detection of the fluorophores bound to the MagMOON particles at their respective wavelengths above the large background of non-bound fluorophores by magnetically modulating the particle orientation in the solution . Similarly to the particle chains described in , the MagMOONs can, therefore, be employed as substrates with magnetically modulated fluorescence contrast to directly carry out sandwich immunoassays in the homogeneous sample solution phase without requiring washing . The group also demonstrated detection of single E. coli bacteria by microscopically observing the magnetorotation of individual MagMOONs ( E. coli antibody functionalized magnetic particles with a diameter of 2 µm by Spherotech, Lake Forest, IL, USA, which are hemispherically coated by a 50 nm thick aluminum layer) . The authors applied a rotating magnetic field (RMF) at a frequency well above the critical frequency of the MagMOON , i.e. 
, the limiting frequency at which a magnetic particle can still follow the applied RMF synchronously . Above this critical frequency, the particle experiences an asynchronous motion, and the superimposed net rotation rate in the direction of the applied RMF decreases with increasing RMF frequency . The authors could show that due to the increasing hydrodynamic drag, the measured net rotation rate of the MagMOONs sensitively depends on the number of bound E. coli bacteria, thus providing a tool for homogeneous and label-free quantification of bacteria concentrations . Ensemble measurements of MagMOONs, however, are hampered by the rather inhomogeneous magnetization of most available magnetic microspheres . This problem has been addressed by hemispherically coating homogeneous size-standard polystyrene particles (diameters of 1, 2, 10 and 100 µm by Polysciences Inc., Warrington, PA, USA) with a nickel layer, thereby reducing the magnetic response variability of the MagMOONs by up to almost one order of magnitude compared to previous results using coated magnetic microspheres . An increase in the throughput of biosensing by observing the magnetorotation of MagMOONs can be accomplished by a droplet-based microfluidic analysis platform, which Sinn et al. introduced and demonstrated for E. coli bacteria growth studies, including fast determination of the minimum inhibitory concentration of the antibiotic gentamicin . Furthermore, the group also demonstrated a stand-alone prototype instrument that no longer requires an optical microscope setup, but measures the magnetorotation of individual MagMOONs by a compact optical setup consisting of a laser diode source and a photodiode detector . Combining such compact optics and high-throughput droplet microfluidics, MagMOON magnetorotation as well as the related methodology of ‘label-acquired magnetorotation’ have the potential to also find applications beyond research tools. 3.2.3. Magnetic Labels with Optical Shape Anisotropy In this section, we review methods that make use of an intrinsic optical anisotropy of rod-shaped particle labels (nanorods) to optically monitor their orientation in the sample solution. This is enabled by differences in the optical polarizability of nanorods along their principal axes in linearly polarized light . In the following, we discuss a biosensing principle based on this effect as it has been introduced by Schrittwieser et al. . Two distinct types of magnetic nanorods are presented, i.e. , nickel (Ni) nanorods and noble metal shell coated cobalt (Co) nanorods . The measurement method can be applied for detection as well as analysis of proteins in solution. Measurement Principle Nanorods consisting of a ferromagnetic core and an antibody-functionalized noble metal shell are optimal probes for this method , which is based on detecting an increase of the hydrodynamic nanoprobe volume upon binding of target molecules (see sketch of the method in ) . The nanoprobes immersed in the sample solution are excited by an external rotating magnetic field (RMF), which they follow coherently due to their permanent magnetic moment that is fixed along the nanorod axis as a consequence of the magnetic shape anisotropy . The rotational behavior depends on the hydrodynamic nanoprobe drag, which causes the nanoprobe orientation to lag behind the momentary direction of the RMF by a specific phase lag α (see ). Binding of target proteins increases the hydrodynamic nanoprobe volume and drag, thus leading to an increase of the phase lag α. 
This change in the phase lag represents the measurement signal for this method. To detect these phase lag changes, the anisotropic absorption and scattering properties of the nanorods in linearly polarized light are exploited. Specifically, the detected optical signal intensity depends on the actual orientation of the nanoprobes with respect to the direction of polarization of the incoming light . For measurements performed in transmission geometry, nanoprobes aligned perpendicularly to the polarization show a maximum of transmission, and vice versa . Therefore, it is possible to deduce the momentary orientation of the nanoprobes by analyzing the optical signal. Comparison of the actual magnetic field orientation with the momentary nanoprobe orientation allows deducing the phase lag α, i.e. , the measurement signal of interest. The experimental setup for biosensing measurements by this method consists of two pairs of Helmholtz coils aligned perpendicularly to each other, which are fed by two sinusoidal currents that are phase-shifted by 90°. By adjusting the current amplitudes, a uniform rotating magnetic field is generated, with the sample placed in the center of the coil pair arrangement. The optical part of the setup simply consists of a laser diode, a polarizer, and a photodetector arranged in transmission geometry. A lock-in amplifier is used to compare the magnetic signal (specifically: voltage drop across a shunt resistor) with the optical signal. Details on the measurement setup can be found in the literature . Due to the symmetry of the applied cylindrical nanorods, the optical signal is frequency-doubled with respect to the magnetic excitation. Actual measurements can be carried out under variation of the frequency of the externally applied RMF (phase lag spectra), or at a single frequency for rapid analysis. Ni Nanorod Protein Binding Results Nickel nanorods were synthesized by electrochemical deposition into porous alumina templates . In a two-step anodization process , aluminum foils are anodized in sulfuric acid, which results in the formation of a porous alumina surface layer. The two-step anodization process is necessary to obtain ordered homogeneous porous surface layers of small thickness . Next, the non-conductive oxide layer at the pore bottom was thinned by voltage-limited anodization and diameter fluctuations of the pores were reduced by immersion of the foils in phosphoric acid . The pores created in this way were filled with Ni in a Watts bath by pulsed electrodeposition . Negative and positive voltage pulses were applied periodically to yield homogeneous nanorod growth (see for details). Finally, the nanorod-enclosing aluminum oxide was dissolved in sodium hydroxide with the addition of polyvinylpyrrolidone (PVP) with a molecular weight of 3500 Da as a surfactant for nanorod dispersion stabilization. Washing with water and re-dispersion of the nanorods were done by repeated precipitation in a centrifuge and sonication. a shows a transmission electron microscopy (TEM) image of the final single-particle dispersed nanorod solution. The mean values and standard deviations of the Ni nanorod lengths and diameters were determined by TEM image analysis, and the mean particle magnetic moment was obtained by vibrating sample magnetometry (VSM) measurements . Protein binding to the surface of the Ni nanorods was examined by recording and comparing phase lag spectra of nanorod solutions with and without added protein. 
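Before turning to the experimental results, a minimal numerical sketch may help to illustrate why such phase lag spectra are sensitive to an added protein shell. It evaluates the steady-state torque balance mB·sin α = ξ_r·ω for a rigid nanorod rotating synchronously with the RMF, using an approximate end-corrected cylinder drag expression; the field strength, magnetic moment and rod dimensions are placeholder values and this is not the theoretical model applied in the cited works.

```python
import numpy as np

# Steady-state torque balance for a nanorod rotating synchronously with an RMF:
#   m * B * sin(alpha) = xi_r * omega   =>   alpha = arcsin(xi_r * omega / (m * B)),
# valid up to the critical frequency omega_c = m * B / xi_r.

eta = 1.0e-3   # viscosity of water (Pa*s)
B = 1.0e-3     # RMF amplitude (T), placeholder value
m = 1.0e-17    # magnetic moment of one nanorod (A*m^2), placeholder value

def xi_rot(L, d):
    """Approximate rotational drag coefficient (N*m*s) of a cylinder of length L and
    diameter d rotating about its short axis (end-corrected slender-body expression)."""
    p = L / d
    delta = -0.662 + 0.917 / p - 0.050 / p**2
    return np.pi * eta * L**3 / (3.0 * (np.log(p) + delta))

def phase_lag_deg(f, L, d):
    """Phase lag (degrees) between nanorod axis and RMF at rotation frequency f (Hz)."""
    x = xi_rot(L, d) * 2.0 * np.pi * f / (m * B)
    # above the critical frequency (x > 1) the rotation becomes asynchronous and this
    # simple model no longer applies; clipping only keeps arcsin defined
    return np.degrees(np.arcsin(np.clip(x, 0.0, 1.0)))

f = np.logspace(1, 4, 60)                        # 10 Hz ... 10 kHz
alpha_plain = phase_lag_deg(f, 200e-9, 30e-9)    # plain nanorod (hydrodynamic size)
alpha_coated = phase_lag_deg(f, 240e-9, 70e-9)   # same rod with a ~20 nm protein shell
delta_alpha = alpha_coated - alpha_plain         # frequency-dependent sensing signal
```

The difference between the two computed spectra corresponds to the phase lag difference that is used as the sensing signal in the experiments described below.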
BSA was chosen as a model protein that binds nonspecifically to the nanorod surface . To quantify the protein shell thickness, a recently developed theoretical model was applied to carry out model fits of the measurement results. Ni nanorods were employed together with a BSA concentration sufficient for at least five times full protein coverage of the nanorod surface . Note, however, that for similar coatings no more than a monolayer of proteins can be adsorbed . b shows the measured phase lag spectra of Ni nanorods at an external magnetic field strength of 1 mT . Here, the dots represent measured values, while the lines correspond to the results of the fitting procedure. Absolute phase lags of plain nanorods without bound protein (black) and nanorods with bound BSA protein (grey) are plotted against the left y-axis, while the phase lag difference (blue) between the two NP states is plotted against the right y-axis. In each state, the nanorods show a specific hydrodynamic shell thickness on top of the bare metal nanorod surface, which for plain nanorods comprises the PVP surfactant layer and the stagnant surface layer, while for nanorods with bound BSA, the thickness of the protein shell is added to the total shell thickness. By fitting the measured phase lag spectra at both nanorod states by the empirical equations derived from the respective theoretical model , the authors determined an added protein shell thickness of about 22 nm . Noble Metal Coated Co Nanorod Protein Binding Results The Co nanorods presented here possess a small diameter of ~5 nm, which means that surface oxidation easily affects the entire volume. Thus, a precondition for applying Co nanorods for the presented measurement method is the protection of the magnetic core against degradation. This was achieved by a noble metal shell synthesized on top of the magnetic Co core. In the first main step, bare Co nanorods were synthesized, which were covered in the second main step by a noble metal shell of platinum (Pt) and gold (Au) via an interlayer of tin (Sn) (Co@SnPtAu nanorods). Both synthesis steps are described in detail in the literature (see ). In brief, bare Co nanorods were fabricated by decomposing a cobalt coordination precursor in the presence of different ligands in anisole solution under a hydrogen atmosphere at elevated temperature. In the next step, a Sn-containing layer was grown on top of the nanorod surface to reduce the interface energy between the Co core and the following noble metal shell compounds. The first noble metal shell coating was done with Pt by reacting a Pt precursor with the nanorod surface when immersed in toluene under a hydrogen atmosphere, which was then followed by a Au coating process under similar conditions, finally resulting in Co@SnPtAu nanorods. Co core noble metal shell nanorods that have been prepared as outlined above are stable against oxidation and degradation of the magnetic core. a shows a TEM image of a nanorod batch with resulting mean particle lengths of 75 ± 6 nm and diameters of about 9.0 ± 4.5 nm . The polycrystalline nature of the nanorod shell is illustrated by the high resolution transmission electron microscopy (HRTEM) image in b. An elemental map of such a nanorod obtained by scanning transmission electron microscopy energy-dispersive X-ray spectroscopy (STEM-EDX) is shown in c–f. Here, the different metals are represented by different colors. It can be seen that the growth of the noble metal shell materials takes place on different sections of the nanorod surface. 
Both shell metals together form a continuous layer that protects the magnetic Co core from oxidation, which was also shown by VSM measurements before and after exposure to air and water . The Co@SnPtAu nanorods are synthesized in organic solvents, so they have to be transferred to aqueous solution to be applicable for any kind of biological measurement. To that end, the nanorods were coated by an amphiphilic polymer consisting of a hydrophilic backbone and hydrophobic side chains . Stabilization of the NPs in water was achieved by charged carboxy groups of the hydrophilic polymer backbone on the nanorod surface . The advantage of these nanorods compared to the Ni nanorods is the presence of the carboxy groups, which can be employed for further surface modifications. This was accomplished by linking antibodies to the nanorods to target a specific protein in a sample solution (contrary to the unspecific adhesion of BSA to the Ni nanorods as described above). The analyte protein to be detected was the soluble domain of the human epidermal growth factor receptor 2 (sHER2) and the antibody protein immobilized onto the nanorods was the monoclonal IgG antibody trastuzumab. Both proteins are clinically applied for the detection and the treatment of breast cancer . a shows the phase lag α spectra recorded at an external magnetic field strength of 5 mT in buffer solution for nanorods without antibody functionalization (nanoreagent—black markers), nanorods including the antibody shell (nanoprobe—red markers) and for nanoprobes fully coated by the target protein (blue markers) . Fitting of the experimental data (solid lines in the figure) by the respective theoretical model resulted in hydrodynamic shell thicknesses of 15 ± 9.5 nm for the antibody shell and of 25 ± 13 nm for the antibody shell including bound target protein (both measured on top of the nanoreagents). These values are in good agreement with respective protein sizes reported in the literature . Here, the target protein sHER2 was added in saturation (200 nM) to ensure full nanoprobe coverage . Addition of BSA protein to the nanoprobes at an even higher concentration (15 µM) did not result in a detectable change in phase lag (green markers), thus demonstrating specific binding of the sHER2 target protein. To detect the concentration of the target protein in solution, it is sufficient to measure the phase lag difference Δα of the nanoprobes to reference nanoprobes without added sHER2 at a single frequency. To that end, a separate experimental setup was chosen to generate a higher magnetic field strength of 10 mT at a fixed rotational frequency of 1000 Hz . The respective sHER2 assay results are shown in b . The sensitivity of the assay was determined by fitting the data by a logistic function , which results in a limit of detection of 440 pM .
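As a final illustration, the sketch below shows one common way such a dose-response curve can be evaluated: a four-parameter logistic function is fitted to the data and a detection limit is read off as the lowest concentration at which the fitted signal exceeds the blank level by three standard deviations. The concentration and signal values as well as the 3σ criterion are invented for illustration and do not reproduce the analysis or the numbers of the cited study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(c, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / c) ** hill)

# --- illustrative dose-response data: phase lag difference vs. analyte concentration ---
conc = np.array([0.05, 0.1, 0.5, 1, 5, 10, 50, 100, 200])            # nM
signal = np.array([0.02, 0.05, 0.3, 0.8, 2.5, 4.0, 5.8, 6.1, 6.2])   # degrees (made up)
sigma_blank = 0.05                                                    # std. dev. of blank (deg)

popt, _ = curve_fit(logistic4, conc, signal, p0=(0.0, 6.0, 5.0, 1.0), maxfev=10000)

# Limit of detection: lowest concentration whose fitted signal exceeds blank + 3*sigma
c_fine = np.logspace(-3, np.log10(conc.max()), 2000)
threshold = popt[0] + 3.0 * sigma_blank
lod = c_fine[np.argmax(logistic4(c_fine, *popt) > threshold)]
print(f"estimated LoD ≈ {lod:.3f} nM")
```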
In their initial work using RMF agitation only, the authors demonstrated total DNA concentration detection by aggregation of magnetic particles (8 µm diameter magnetic silica particles by Promega, Madison, WI, USA) for direct white blood cell count from human whole blood samples . This ‘chaotrope-driven aggregation’ (CDA, ) is caused by unspecific adsorption of DNA onto particles with silica surface driven by DNA dehydration, which is induced by the addition of chaotropic salts . Furthermore, the authors could also achieve detection of specific DNA sequences (synthetic 26-base target) by ‘hybridization induced aggregation’ (HIA, ) recognition of magnetic particles (1 µm diameter Dynabeads by Invitrogen, Waltham, MA, USA) functionalized by two different oligonucleotides complementary to the 5′ and 3′ end of the target sequence . Later, still making use of the RMF-only setup, the group extended their total DNA concentration CDA analysis to microbial growth testing ( E. coli detection) as well as differentiation of CD4+ T-Cells, the latter achieved by adding an immunomagnetic separation step up-front . Following introduction of the dual-force setup , the group systematically analyzed the influence of different target sequence parameters on the HIA efficiency of the target to oligonucleotide-functionalized magnetic particles, also including differentiation of one, two and three base mismatches . The latter analysis was further advanced for detecting single nucleotide polymorphism mutation of the KRAS gene from pancreatic and lung cancer cell lines by the dual-force setup, demonstrating efficient HIA discrimination of mutant and wild-type KRAS genes following polymerase chain reaction (PCR) amplification to a minimum number of 10 12 copies . While the CDA approach is intrinsically non-specific, it can also be turned into a specific detection by performing sequence-specific DNA amplification reactions up-front. However, efficient CDA requires DNA lengths of at least 10 kilo-base-pairs (kbp), while the products of amplification reactions are usually much shorter . By introducing a competitive assay format, where rising concentrations of the amplification product increasingly inhibit magnetic particle agglomeration that is induced by addition of a fixed concentration of 48 kbp long λ-phage DNA, DuVall et al. demonstrated successful detection of the food-borne pathogens E. coli and Salmonella as well as the Rift Valley fever virus by CDA following loop-mediated isothermal amplification (LAMP) . An even simpler CDA analysis procedure called ‘pipette, aggregate and blot’ (PAB) was introduced by Li et al. . Here, the magnetic particles and the sample are sequentially picked up by a pipette, and the mixture within the pipette tip is exposed to a static magnetic field to induce DNA-mediated formation of aggregates . Next, the fluid is dispensed onto a filter paper (‘blotting’), on which the degree of particle aggregate formation is determined by digital photography and image analysis, i.e. , a process that can also be accomplished by any smart phone . The authors demonstrated detection of human genomic DNA from purified whole blood by the PAB technique and showed that the achievable detection limit depends on the size of the employed magnetic particles (800 ng/mL for 1 µm diameter by Invitrogen, Waltham, MA, USA, and 6.4 µg/mL for 8 µm diameter magnetic silica particles by Promega, Madison, WI, USA) . 
While this does not reach the detection limit of 250 pg/mL demonstrated for genomic DNA detection by CDA analysis using the dual-force setup , the PAB approach has an advantage with regard to its simplicity. A very similar procedure was followed by Lin et al , who exposed mixtures of magnetic particles (1 µm diameter by Invitrogen, Waltham, MA, USA) and the sample solution to multiple sequences of aggregation (application of a static magnetic field) and re-suspension . Following dispensing of the mixture onto a filter paper, the degree of particle clustering is determined by digital image analysis of the filter paper . The authors demonstrated their method for the detection of the human papilloma virus type 18 gene following rolling circle amplification (RCA), and could successfully distinguish positive samples (genomic DNA isolated from HeLa cells) from negative control samples (genomic DNA isolated from human hepatoma cells) . With the exception of total DNA content determination by CDA (, the CDA part of Reference and the white blood cell analysis part of ), all naked-eye detection papers presented above do not strictly fall into the category of one-step homogeneous detection, as they do involve some sort of upfront sample preparation, i.e. , immunomagnetic separation (CD4+ T-Cell detection part of ), DNA amplification or DNA purification (, HIA part of ). A true one-step analysis procedure comprising analyte-mediated formation of particle clusters in an applied magnetic field has lately been introduced by Chen et al. . shows a schematic representation of the measurement principle employed by the authors, which they designate as ‘immunomagnetic aggregation’ (IMA) . Here, a static magnetic field is applied that attracts the magnetic particles (immunomagnetic beads, IMB) to the side wall of the sample tube, and the structure of the resulting agglomerate depends on the presence of target molecules in the solution . The reason is the increased diameter and decreased net magnetization of an IMB-target complex as compared to blank IMBs, which influences the balance between the attractive magnetic force component tangential to the wall and the friction force, thus leading to an expanded arc-shaped aggregation of IMB-target complexes along the tube wall as opposed to a compact stripe-shaped form for blank IMBs (see top view representation in ) . The authors compare their naked-eye IMA detection results to gold lateral flow strip (GLFS) references . In addition, by analysis of digital images taken from the sample tubes, the authors extract an average grey scale value that semi-quantitatively depends on the target molecule concentration and can be used to compare the IMA data with dose-response curves obtained from enzyme-linked immunosorbent assay (ELISA) based reference detection . Employing magnetic beads (200 nm diameter Estapor particles by Merck, Darmstadt, Germany) functionalized by polyclonal E. coli antibodies, the authors demonstrate a detection limit of about 10 4 cfu/mL for the direct detection of E. coli bacteria in spiked river water samples within 15 min, which is one order of magnitude more sensitive than reference GLFS detection, and about ten times faster than reference ELISA detection . Besides, the authors likewise confirm correct IMA-based identification of E. coli contamination of non-spiked water samples obtained from a livestock farm . 
In addition to bacteria, the authors also show detection of the cancer biomarker proteins alpha fetoprotein (AFP) and carcino-embryonic antigen (CEA) directly in spiked urine samples using magnetic particles functionalized by pairs of respective monoclonal antibodies, and achieve a detection limit of about 2.5 ng/mL for AFP and 2.0 ng/mL for CEA, both of which are well below the clinical cut-off values . Finally, the authors successfully discriminate AFP and CRP positive from negative patients by IMA-analysis using non-spiked clinical serum samples .
In an applied magnetic field, the magnetic moment of individual particle labels aligns in field direction, and the magnetic dipolar interaction between particles can lead to formation of particle chains along the field lines. In this way, it is possible to conduct standard sandwich immunoassays on the surface of the magnetic particles and read out analyte concentration dependent signals directly in the sample solution without requiring washing. The concept was introduced by Anker et al. and is sketched in . The observed fluorescence intensity of fluorophores bound to the surfaces of the magnetic particles can be modulated by varying the orientation of the particle chains by the applied magnetic field, which changes the relative number of visible (dark stars in ) to non-visible (light stars in ) fluorophores. In a demonstration experiment, Anker et al. applied biotin-labeled fluorophores directly to streptavidin-coated magnetic particles (870 nm mean diameter by Bangs Laboratories Inc., Fishers, IN, USA), and could demonstrate the detection of bound fluorophores above the large background of unbound labels by magnetic modulation . A similar demonstration experiment was later carried out by Petkus et al. , who showed detection of fluorophore-labeled cortisol analyte by magnetic particles (1.6 µm mean diameter BioMag particles by Polysciences Inc., Warrington, PA, USA) functionalized by monoclonal cortisol antibodies . They could achieve a cortisol detection limit of 300 pM in buffer solution by magnetically rotating particle clusters and analyzing the modulated fluorescent intensity by a lock-in amplifier . They later extended their analysis to the cardiac protein biomarker myglobin, and compared immunoassays both in competitive and sandwich (non-competitive) format performed in buffer and serum . They achieved similar detection limits in buffer and serum of about 2.5 nM for the competitive format, and about 50 pM for the sandwich format . Here, however, the only assay that was performed strictly without any washing step was the competitive format type assay in buffer, while all other experiments included at least one washing step . This is also true for the most recent study by the group, where following further refinement of their image analysis procedure , they demonstrate highly sensitive detection of three cardiac biomarkers (detection limits: myoglobin ~360 aM, heart-type fatty acid binding protein (H-FABP) ~67 fM, troponin I ~42 fM) . The biomarkers are spiked into buffer solutions and are detected by a sandwich immunoassay format performed on magnetically rotated particle clusters . In summary, while immunoassays on magnetically rotated particle clusters can be in principle applied in a strictly homogeneous format , up to now the most sensitive results do involve washing steps , and it has yet to be demonstrated that the method can also be performed directly in unprocessed sample material with sufficient sensitivity and specificity.
Another approach to magnetic particle clustering is to induce binding between magnetic particles in an external magnetic field via bound analyte molecules, thus creating particle doublets, multiplets, chains or clusters that are also retained once an applied magnetic field is removed again. Here, a prerequisite is that the analyte molecule possesses multiple binding sites to receptors immobilized onto the magnetic particles, thus enabling cluster formation of particles. If this is not the case, as usually encountered for small molecule detection, a competitive assay format can also be chosen where clustering of particles is reduced by analyte interaction (see , for example). In the following, different concepts are presented which are based on optical detection of analyte-specific clustering of magnetic particles that make use of magnetic fields to accelerate cluster formation and/or to induce periodic variations in the optical signal.

Detection of Magnetically Accelerated Particle Dimer Formation

The most basic realization of particle clustering mediated by analyte molecule binding was introduced by Baudry et al. and is sketched in . It is based on optical density measurements of magnetic particle dispersions functionalized by either polyclonal or two different types of monoclonal antibodies against the target antigen. The antigens are then captured at the surfaces of the particles, and the formation of particle chains in an applied magnetic field accelerates the creation of particle doublets via bound analyte molecules, which are also retained once the magnetic field is switched off again. As particle doublets scatter light differently from two single particles, the concentration of dimers, and, thus, analyte molecules can be quantified by turbidimetric (extinction) measurements . In an initial demonstration experiment, Baudry et al. showed a detection limit of about 1 pM for ovalbumin model analyte in buffer by magnetic particle labels (200 nm diameter by Ademtech SA, Pessac, France) functionalized by polyclonal ovalbumin antibodies with a total cycle time of five minutes, which includes application of a 20 mT magnetic field for one minute to accelerate dimer formation . From experiments without the magnetic field incubation step, the authors extrapolate that achieving the same density of dimers without magnetic field acceleration would take more than eight hours . Thus, compared to long-established similar immunoassays based on agglomeration of latex particles , the magnetic agitation step makes this simple method both fast and sensitive. Following detailed analysis of the theory of ligand-receptor interaction in chains of magnetic particles and experimental investigations of the kinetics for analyte molecules with different tether lengths and numbers of binding sites , the group also demonstrated the method to be capable of detecting C-reactive protein (CRP) directly from serum samples with a detection limit of about 1 pM and a dynamic range of three orders of magnitude with a total cycle time of one minute . Finally, the group also introduced an advanced measurement method, where the concentration of dimers is no longer determined in a randomized state, but the extinction difference of the dimers for magnetic-field induced alignment parallel and perpendicular to the optical axis is used, which further increases the signal and achievable sensitivity .
Here, the trick is to apply the aligning magnetic field pulse at a magnitude sufficient to rotate the particle doublets created by analyte molecule interaction in the field direction, but insufficient to induce re-chaining of particle labels by magnetic dipolar interactions, which would lead to a false unspecific signal . For their chosen experimental conditions, the authors determined a field magnitude of 5 mT as a good compromise between particle doublet alignment rate and prevention of re-chaining .

Magnetically Rotated Particle Chain Detection

A measurement method similar to that described by Baudry et al. has been introduced by Park et al. , but instead of following a multi-stage protocol, they carry out a one-step procedure that comprises continuous application of a rotating magnetic field (RMF) . Here, the RMF induces formation of magnetic particle chains that follow the applied field rotation, which also leads to modulation of the transmitted light intensity (see a) . The length of the particle chains is limited by the balance between the hydrodynamic force due to the viscosity of the solution and the total strength of the attractive force between the particles. For particles with bound analyte molecules, their binding strength adds to the attractive magnetic dipolar interaction force between particles, thus leading to an increasing average particle chain length with analyte molecule concentration (see b) . As the modulation intensity of the transmitted light also depends on the average length of the rotating particle chains, the amplitude of the transmitted light intensity is a measure of the analyte concentration in the sample solution . Applying biotinylated magnetic particles with a mean diameter of 250 nm, Park et al. demonstrated this method for direct one-step detection of the model analyte avidin with a detection limit of about 100 pM within a measurement time of less than 30 s . While detection of actual biomarkers in real samples still needs to be demonstrated, this method represents a fast and simple homogeneous analysis of biomarkers.

Scattering Detection of Particle Cluster Magnetorotation

The principal multi-step measurement procedure introduced by Baudry et al. , which comprises incubation of the samples with functionalized magnetic particle labels, acceleration of particle clustering via bound analyte molecules by inducing chain formation in an applied magnetic field and optical detection of the formed particle clusters, has been refined with regard to the final detection step by Ranzoni et al. . While the quantity of interest, which is the concentration of particle clusters, is measured above a large background signal of non-agglomerated particles by the extinction measurements performed by Baudry et al. , Ranzoni et al. introduced a method specific to particle clusters based on scattering measurements in a rotating magnetic field (RMF) . shows a sketch of their measurement setup, where the optical path is along the z-axis, the RMF is applied in the xz-plane, and the scattered light is picked up at an angle of ~30° from the z-axis. Due to their characteristic magnetic and optical anisotropy, particle doublets rotate with the applied magnetic field and induce a modulation of the scattered light intensity at twice the frequency of the RMF, while the contribution of single particles to the optical signal modulation is negligible .
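The doublet-specific read-out can be illustrated with a hedged numerical sketch: the amplitude of the component at twice the field rotation frequency is extracted from a simulated scattering trace by an FFT; the chosen frequencies and amplitudes are assumptions for demonstration only, not values from the cited work.

```python
import numpy as np

# Minimal sketch (illustrative only): extracting the 2f component of a scattering
# trace, i.e., the doublet-specific signal in a rotating magnetic field.

fs = 2000.0                       # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)     # 2 s trace
f_rmf = 5.0                       # rotation frequency of the applied field (Hz), assumed

A_doublet = 0.8                   # 2f modulation amplitude from rotating doublets (a.u.)
background = 10.0                 # orientation-independent scattering of single particles

rng = np.random.default_rng(1)
trace = background + A_doublet * np.cos(2 * np.pi * 2 * f_rmf * t) + rng.normal(0, 0.3, t.size)

# FFT and read-out of the amplitude at exactly twice the field rotation frequency
spectrum = np.fft.rfft(trace) / (t.size / 2)     # scaled so peak heights give amplitudes
freqs = np.fft.rfftfreq(t.size, 1 / fs)
idx_2f = np.argmin(np.abs(freqs - 2 * f_rmf))

print(f"recovered 2f amplitude: {abs(spectrum[idx_2f]):.2f} a.u. (true: {A_doublet})")
```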
The measurement signal represents the magnitude of the 2nd harmonic of the optical scattering intensity as analyzed by fast Fourier transformation (FFT). When the frequency of the RMF is increased, the particle doublets first follow the RMF synchronously with increasing phase lag up to a critical frequency, which is defined by equal magnetic and drag torques, while at higher frequencies, alternating forward and backward rotations of the doublets occur . By analyzing the resulting frequency dependence of the particle magnetorotation, Ranzoni et al. demonstrated direct quantification of the concentrations of particle doublets as well as the average values and variations of the magnetic susceptibilities of magnetic particles (particles with mean diameters of 300 nm and 500 nm by Ademtech SA, Pessac, France) . To demonstrate the applicability of their method for homogeneous biosensing, they carried out detection of spiked biotinylated bovine serum albumin (BSA) model analyte by streptavidin-functionalized particle labels and showed a detection limit of about 400 fM in buffer and 5 pM in plasma . By optimizing the molecular surface architecture of the magnetic label antibody functionalization, the group could also demonstrate detection of the cancer biomarker prostate-specific antigen (PSA) directly in blood plasma, thereby achieving a detection limit of about 500 fM for a total assay time of 14 min (160 fM in buffer) . In the analysis of the measured analyte dose-response curves, the authors observed two plateaus, which, by modeling of the dependence of the optical signal on the degree of cluster formation, they could attribute to a low analyte concentration regime where only particle singlets and doublets exist, and a higher analyte concentration regime where particle multiplets are also formed .

Detection of Bead Assembly Magnetorotation

Instead of adjusting the experimental parameters to a regime where mainly formation of particle doublets occurs, another approach is to analyze larger particle clusters. To that end, Kinnunen et al. realized a biosensor based on measuring the magnetorotation of magnetic particles that assemble into a cluster at the bottom of a hanging droplet (see a) . The droplet is illuminated by a laser or LED light source from above, and the particle cluster is observed from below either by an inverted microscope or by a photodetector . Here, the droplet also serves as a lens to magnify the shadow image of the particle cluster 100-fold . The particle cluster is rotated in the image plane by an applied RMF, and the frequency of the RMF is chosen well above the critical frequency of the particle cluster . The critical frequency is defined as the maximum rotation frequency at which a magnetic particle (or particle cluster) can still follow the applied RMF synchronously ( i.e. , equality of magnetic and hydrodynamic drag torque) . Above this critical frequency, the particle (or particle cluster) experiences an asynchronous motion, and the superimposed net rotation rate in the direction of the applied RMF decreases with increasing RMF frequency . By performing an FFT of the optical signal, the net rotation rate of the particle cluster is determined, and changes in the particle cluster assembly (e.g., cluster expansion or volume increase) or the local fluid viscosity alter the net rotation rate of the particle cluster (see b) .
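A minimal sketch of this asynchronous regime, based on the standard overdamped-rotor (torque balance) model rather than the authors' exact analysis, illustrates how a larger cluster (higher drag, lower critical frequency) translates into a longer observed rotational period; all numbers are assumed.

```python
import numpy as np

# Simplified textbook model (not the authors' exact analysis) of asynchronous
# magnetorotation: above the critical frequency w_c, the time-averaged net
# rotation rate of a cluster driven at frequency w is w - sqrt(w^2 - w_c^2).
# A growing cluster volume increases the rotational drag and thus lowers w_c,
# which lengthens the observed rotational period. Numbers are arbitrary.

def net_rotation_rate(w_drive, w_critical):
    """Average rotation rate (rad/s) of an overdamped magnetic rotor."""
    if w_drive <= w_critical:
        return w_drive                                     # synchronous regime
    return w_drive - np.sqrt(w_drive**2 - w_critical**2)   # asynchronous regime

w_drive = 2 * np.pi * 10.0        # applied RMF at 10 Hz (assumed)
for label, w_c in [("bare cluster", 2 * np.pi * 4.0),
                   ("cluster grown by bound analyte", 2 * np.pi * 2.5)]:
    w_net = net_rotation_rate(w_drive, w_c)
    period = 2 * np.pi / w_net
    print(f"{label}: net rotation period = {period:.2f} s")
```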
By employing this measurement principle and magnetic particles (2.8 µm diameter by Invitrogen, Waltham, MA, USA) functionalized by E. coli antibodies, Kinnunen et al. performed E. coli bacteria growth studies, including determination of the minimum inhibitory concentration of the two antibiotics streptomycin and gentamicin . Here, bacteria growth on the particle cluster caused an increase of the cluster volume, thus leading to an increase of its rotational period . In the following, the group expanded their analysis to the blood coagulation factor thrombin by observing clusters of magnetic particles (1 µm diameter by Invitrogen, Waltham, MA, USA) functionalized by two different thrombin-specific aptamers . The main effect of thrombin target protein binding to the particles was an expansion of the gaps between the particles, thus leading to larger cluster volumes and increased rotational periods . The authors also determined the dependence of the fractal dimension of the particle clusters on the thrombin concentration by optical microscopy, which showed good agreement with the magnetorotation period analysis . In buffer, the authors demonstrated a thrombin detection limit as low as 80 fM , which, however, increases to about 7.5 nM in serum (see SI of ), which the authors mainly attribute to the low specificity of the aptamer receptors . More recently, the group also presented a prototype version of their measurement principle, which no longer requires a microscope or hanging droplets, but is realized on three stacked 384-well plates and enables 48-plex detection . The middle plate contains the sample and the particle cluster, while the top and bottom plates incorporate the optics (LED light sources and photodiode detectors, respectively) . The authors demonstrated detection of E. coli bacteria (LoD 5000 cfu/mL) within a total analysis time of about 90 min and also determined the minimum inhibitory concentration of the antibiotic gentamicin .

Optomagnetic Detection Incorporating Blu-ray Optics

A highly integrated optomagnetic device for measuring the response of magnetic particle clusters to an applied magnetic field that makes use of Blu-ray optical components and a microfluidic disk has recently been introduced by Donolato et al. . displays a sketch of the most recent version of the employed setup, where the magnetic particle labels within the detection chamber are excited by a linear AC magnetic field generated by electromagnets placed above and below the microfluidic disk . The dynamic response of the particle labels to the AC magnetic field is determined optically by transmission measurements of light emitted from a Blu-ray laser diode and picked up by a photodetector . The measurement signal is given by the 2nd harmonic of the photodetector signal, which is usually recorded as a function of the frequency of the applied AC magnetic field (2nd harmonic spectrum) . As larger magnetic clusters are formed by analyte-induced binding, the hydrodynamic drag of the clusters increases, resulting in an altered magnitude and frequency of the peak in the 2nd harmonic spectrum . As an initial proof-of-concept of the method, Donolato et al. demonstrated DNA-based detection of E. coli bacteria following isothermal rolling circle amplification (RCA), employing magnetic particles (100 nm diameter by Micromod, Rostock, Germany) functionalized by oligonucleotide detection probes that bind to the DNA coils produced by the RCA, and demonstrated a detection limit of about 10 pM of DNA coils in buffer solution .
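As a rough, hedged illustration of why cluster growth shifts this peak, one can estimate the Brownian relaxation frequency of the labels, which scales with the inverse cube of the hydrodynamic diameter; the simple sphere model and parameter values below are assumptions and not the published calibration.

```python
import numpy as np

# Back-of-envelope sketch (simplified; not the published model): the peak in the
# 2nd harmonic spectrum lies near the Brownian relaxation frequency of the labels,
# f_B = 1 / (2*pi*tau_B) with tau_B = 3*eta*V_h / (k_B*T), so analyte-induced
# clustering (larger hydrodynamic diameter) pushes the peak to lower frequencies.

k_B = 1.380649e-23   # J/K
T = 298.0            # K, assumed
eta = 1.0e-3         # Pa*s, water-like viscosity (assumed)

def brownian_peak_frequency(d_hydro_nm):
    """Approximate Brownian relaxation frequency (Hz) of a sphere of given diameter."""
    d = d_hydro_nm * 1e-9
    v_hydro = np.pi / 6 * d**3
    tau_b = 3 * eta * v_hydro / (k_B * T)
    return 1 / (2 * np.pi * tau_b)

for label, d_nm in [("single 100 nm label", 100), ("small cluster (~250 nm)", 250)]:
    print(f"{label}: f_B ~ {brownian_peak_frequency(d_nm):.0f} Hz")
```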
In the following, the group evaluated different sensing geometries, and found that a configuration with perpendicular alignment of the AC magnetic field to the optical axis and parallel alignment of the linear polarization direction of the incident light to the AC magnetic field gives the largest signal. In addition to the previously introduced E. coli bacteria detection via RCA products , they demonstrated this configuration for the detection of biotinylated BSA model analyte by streptavidin-functionalized magnetic particle labels (obtained detection limit in buffer ~100 pM) . By adding an incubation step in a sufficiently strong static magnetic field to accelerate particle clustering via bound analyte molecules prior to data acquisition (see permanent magnets in ) and digesting the DNA coil RCA products into monomers, the group demonstrated simultaneous detection of three different bacteria causing urinary tract infection ( E. coli , Proteus mirabilis and Pseudomonas aeruginosa ) . In addition, they showed identification of E. coli bacteria from 28 urine samples with 100% specificity compared to standard clinical laboratory plate culture data . The group also adapted their method to a competitive assay format for the detection of the small molecule adenosine triphosphate (ATP), showing a detection limit of about 74 µM in buffer and a dynamic range of ~0.1–10 mM, which agrees well with the clinically relevant ATP concentration range . Next, the group showed direct detection of Salmonella bacteria by a competitive assay incorporating two types of magnetic particles, i.e. , large capture particles (5 µm diameter by Micromod, Rostock, Germany) and small detection particles (100 nm diameter by Micromod, Rostock, Germany) . Following a sedimentation step of the large capture particles, the concentration of the remaining detection particles is measured, which due to the competitive assay format scales with the concentration of bacteria, resulting in a detection limit of about 80,000 cfu/mL in buffer . The latest application demonstrated by the group concerns quantification of the dengue fever protein biomarker NS1 by magnetic particle labels (170 nm diameter by Merck, Darmstadt, Germany) functionalized by two different monoclonal NS1 antibodies, resulting in a detection limit of 25 ng/mL (corresponds to ~500 pM at a NS1 molecular weight of 46–55 kDa ) measured directly in spiked serum samples .

Naked Eye Detection of Particle Clusters

The easiest way to optically sense the formation of particle clusters in an applied magnetic field is, of course, by naked-eye detection. This detection modality has been introduced by Leslie et al. , who applied a rotating magnetic field (RMF) to magnetic particles dispersed in a microfluidic well to detect DNA via particle cluster formation, which is quantified by digital image analysis . shows a sketch of the group’s latest setup , which in addition to the RMF also incorporates agitation of the particles by a vortexer (‘dual-force’ ) to enhance the homogeneity of cluster formation across multiple neighboring wells (12 wells demonstrated), but also to speed up the required incubation time and to enhance the detection limit . The images in show the distribution of magnetic particles following the agitated incubation for a control without analyte DNA (−) and a sample with analyte DNA , the presence of which induces agglomeration of particle labels visible to the naked eye .
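A generic sketch of such an image-based read-out is given below: a grayscale well image is thresholded and the particle-covered area fraction is reported, which drops when dispersed particles collapse into a compact aggregate. This is only a schematic stand-in with synthetic images, not the authors' actual image-analysis pipeline.

```python
import numpy as np

# Generic sketch of image-based aggregation read-out (not the authors' exact
# pipeline): threshold a grayscale well image and report the particle-covered
# area fraction, which drops when dispersed particles collapse into one aggregate.

def covered_fraction(image, threshold=100):
    """Fraction of pixels darker than the threshold (0 = black, 255 = white)."""
    return np.mean(image < threshold)

rng = np.random.default_rng(2)

# Synthetic stand-ins for well images (bright background = 230, particles = 30):
dispersed = np.full((200, 200), 230, dtype=np.uint8)
dispersed[rng.random((200, 200)) < 0.15] = 30          # 15% of pixels carry particles

aggregated = np.full((200, 200), 230, dtype=np.uint8)
aggregated[80:120, 80:120] = 30                         # same particles pulled into one clump

print(f"negative control (dispersed): {covered_fraction(dispersed):.2%} covered")
print(f"positive sample (aggregated): {covered_fraction(aggregated):.2%} covered")
```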
In their initial work using RMF agitation only, the authors demonstrated total DNA concentration detection by aggregation of magnetic particles (8 µm diameter magnetic silica particles by Promega, Madison, WI, USA) for direct white blood cell count from human whole blood samples . This ‘chaotrope-driven aggregation’ (CDA, ) is caused by unspecific adsorption of DNA onto particles with a silica surface driven by DNA dehydration, which is induced by the addition of chaotropic salts . Furthermore, the authors could also achieve detection of specific DNA sequences (synthetic 26-base target) by ‘hybridization induced aggregation’ (HIA, ) recognition of magnetic particles (1 µm diameter Dynabeads by Invitrogen, Waltham, MA, USA) functionalized by two different oligonucleotides complementary to the 5′ and 3′ end of the target sequence . Later, still making use of the RMF-only setup, the group extended their total DNA concentration CDA analysis to microbial growth testing ( E. coli detection) as well as differentiation of CD4+ T-Cells, the latter achieved by adding an immunomagnetic separation step up-front . Following introduction of the dual-force setup , the group systematically analyzed the influence of different target sequence parameters on the HIA efficiency of the target to oligonucleotide-functionalized magnetic particles, also including differentiation of one, two and three base mismatches . The latter analysis was further advanced for detecting a single nucleotide polymorphism mutation of the KRAS gene from pancreatic and lung cancer cell lines by the dual-force setup, demonstrating efficient HIA discrimination of mutant and wild-type KRAS genes following polymerase chain reaction (PCR) amplification to a minimum number of 10¹² copies . While the CDA approach is intrinsically non-specific, it can also be rendered specific by performing sequence-specific DNA amplification reactions up-front. However, efficient CDA requires DNA lengths of at least 10 kilo-base-pairs (kbp), while the products of amplification reactions are usually much shorter . By introducing a competitive assay format, where rising concentrations of the amplification product increasingly inhibit magnetic particle agglomeration that is induced by addition of a fixed concentration of 48 kbp long λ-phage DNA, DuVall et al. demonstrated successful detection of the food-borne pathogens E. coli and Salmonella as well as the Rift Valley fever virus by CDA following loop-mediated isothermal amplification (LAMP) .

An even simpler CDA analysis procedure called ‘pipette, aggregate and blot’ (PAB) was introduced by Li et al. . Here, the magnetic particles and the sample are sequentially picked up by a pipette, and the mixture within the pipette tip is exposed to a static magnetic field to induce DNA-mediated formation of aggregates . Next, the fluid is dispensed onto a filter paper (‘blotting’), on which the degree of particle aggregate formation is determined by digital photography and image analysis, i.e. , a process that can also be accomplished by any smartphone . The authors demonstrated detection of human genomic DNA from purified whole blood by the PAB technique and showed that the achievable detection limit depends on the size of the employed magnetic particles (800 ng/mL for 1 µm diameter by Invitrogen, Waltham, MA, USA, and 6.4 µg/mL for 8 µm diameter magnetic silica particles by Promega, Madison, WI, USA) .
While this does not reach the detection limit of 250 pg/mL demonstrated for genomic DNA detection by CDA analysis using the dual-force setup , the PAB approach has an advantage with regard to its simplicity. A very similar procedure was followed by Lin et al. , who exposed mixtures of magnetic particles (1 µm diameter by Invitrogen, Waltham, MA, USA) and the sample solution to multiple sequences of aggregation (application of a static magnetic field) and re-suspension . Following dispensing of the mixture onto a filter paper, the degree of particle clustering is determined by digital image analysis of the filter paper . The authors demonstrated their method for the detection of the human papilloma virus type 18 gene following rolling circle amplification (RCA), and could successfully distinguish positive samples (genomic DNA isolated from HeLa cells) from negative control samples (genomic DNA isolated from human hepatoma cells) . With the exception of total DNA content determination by CDA (, the CDA part of Reference and the white blood cell analysis part of ), the naked-eye detection papers presented above do not strictly fall into the category of one-step homogeneous detection, as they involve some sort of upfront sample preparation, i.e. , immunomagnetic separation (CD4+ T-Cell detection part of ), DNA amplification or DNA purification (, HIA part of ). A true one-step analysis procedure comprising analyte-mediated formation of particle clusters in an applied magnetic field has recently been introduced by Chen et al. . shows a schematic representation of the measurement principle employed by the authors, which they designate as ‘immunomagnetic aggregation’ (IMA) . Here, a static magnetic field is applied that attracts the magnetic particles (immunomagnetic beads, IMB) to the side wall of the sample tube, and the structure of the resulting agglomerate depends on the presence of target molecules in the solution . The reason is the increased diameter and decreased net magnetization of an IMB-target complex as compared to blank IMBs, which influences the balance between the attractive magnetic force component tangential to the wall and the friction force, thus leading to an expanded arc-shaped aggregation of IMB-target complexes along the tube wall as opposed to a compact stripe-shaped form for blank IMBs (see top view representation in ) . The authors compare their naked-eye IMA detection results to gold lateral flow strip (GLFS) references . In addition, by analysis of digital images taken from the sample tubes, the authors extract an average grey scale value that semi-quantitatively depends on the target molecule concentration and can be used to compare the IMA data with dose-response curves obtained from enzyme-linked immunosorbent assay (ELISA) based reference detection . Employing magnetic beads (200 nm diameter Estapor particles by Merck, Darmstadt, Germany) functionalized by polyclonal E. coli antibodies, the authors demonstrate a detection limit of about 10⁴ cfu/mL for the direct detection of E. coli bacteria in spiked river water samples within 15 min, which is one order of magnitude more sensitive than reference GLFS detection, and about ten times faster than reference ELISA detection . Furthermore, the authors also confirm correct IMA-based identification of E. coli contamination of non-spiked water samples obtained from a livestock farm .
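The semi-quantitative grey-value read-out can be sketched, under assumptions, as a simple log-linear calibration that is inverted to estimate an unknown concentration; the calibration points and the 'unknown' value below are synthetic examples, not data from the cited work.

```python
import numpy as np

# Hedged sketch (not the authors' exact procedure): turning the mean grey value of
# a tube image into a semi-quantitative concentration estimate via a log-linear
# calibration. All values are synthetic examples.

# Calibration: mean grey value measured for known target concentrations (assumed)
conc_cal = np.array([1e3, 1e4, 1e5, 1e6, 1e7])      # e.g., cfu/mL
grey_cal = np.array([182.0, 168.0, 151.0, 137.0, 120.0])

# Fit grey value as a linear function of log10(concentration)
slope, intercept = np.polyfit(np.log10(conc_cal), grey_cal, 1)

def estimate_concentration(grey_value):
    """Invert the calibration line to estimate the target concentration."""
    return 10 ** ((grey_value - intercept) / slope)

grey_unknown = 144.0                                 # mean grey value of an unknown sample
print(f"estimated concentration: {estimate_concentration(grey_unknown):.2e} cfu/mL")
```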
In addition to bacteria, the authors also show detection of the cancer biomarker proteins alpha fetoprotein (AFP) and carcinoembryonic antigen (CEA) directly in spiked urine samples using magnetic particles functionalized by pairs of respective monoclonal antibodies, and achieve a detection limit of about 2.5 ng/mL for AFP and 2.0 ng/mL for CEA, both of which are well below the clinical cut-off values . Finally, the authors successfully discriminate AFP- and CRP-positive patients from negative patients by IMA analysis using non-spiked clinical serum samples .
An alternative to generating optical anisotropy by inducing clustering of intrinsically optically isotropic particles (see ) is to make use of magnetic particle labels that display an intrinsic optical anisotropy. To that end, three main approaches have been followed. One possibility is to make use of magneto-optical effects ( i.e. , the Faraday or the Cotton-Mouton effect) as a source of optical anisotropy, which usually result in changes of the polarization state of the incident light as the optical measurement signal (see ). Alternatively, optical anisotropy can be created by hemispherical coating of initially optically isotropic spherical particles (see ) or by employing particle labels with shape anisotropy (e.g., rod-shaped particles, see ). In the latter two cases, the optical measurement signal usually comprises a change in the transmission or scattering intensity of the particle labels.

3.2.1. Magneto-Optical Detection of Magnetic Particle Labels

When an external magnetic field is applied to a suspension of magnetic particles, their magnetic moments align parallel to the applied field, and the suspension becomes birefringent and dichroic. As the dichroism induced in magnetic particle suspensions is usually much smaller than the birefringence , it is normally neglected in the analysis. Both the Faraday effect (magnetic circular birefringence, magnetic field applied parallel to the direction of light propagation, ) and the Cotton-Mouton effect (magnetic linear birefringence; magnetic field applied perpendicular to the direction of light propagation, ) have been exploited to magneto-optically characterize magnetic particles. Regarding the measurement modes, linearly polarized light is incident onto the sample, and the magnetic field amplitude either varies sinusoidally with time (AC susceptibility mode, ), or is applied as a step function (magnetorelaxation (MRX) mode, ). Magneto-optical methods are sensitive to changes in the Brownian relaxation time of magnetic particle suspensions, and, consequently, have been applied to study hydrodynamic particle diameter distributions or medium viscosities . A typical setup employed to magneto-optically (Cotton-Mouton effect) measure the relaxation of the magnetization of a particle ensemble after an externally applied uniaxial magnetizing field is turned off (MRX mode) is sketched in a . It comprises a laser light source that is linearly polarized by a polarizer aligned at −45° relative to the orientation of the magnetic field, which is oriented perpendicular to the propagation direction of the light and is generated by a Helmholtz coil. In the center of the Helmholtz coil, the sample containing the particle dispersion within a non-birefringent cuvette is positioned. When the magnetic moments of the particles are aligned by the applied magnetic field, the suspension becomes birefringent, and the transmitted light becomes elliptically polarized . The physical origin of the optical anisotropy can be related to crystalline or shape anisotropy of the particle cores, but for the commonly applied iron-oxide NPs mostly arises from surface magnetic anisotropy . After passing the quarter wave plate, which is aligned with its slow axis parallel to the polarizer, the light is again linearly polarized, but shifted in polarization by a birefringence-proportional phase lag . As a result, some light can pass the analyzer, which is oriented at +45° relative to the magnetic field ( i.e. , perpendicular to the polarizer), and, consequently, blocks the incident light if no birefringence is induced in the sample ( i.e. , the particles are randomly oriented) .
The transmitted light is measured by a photodiode detector; in this configuration, the detected intensity is proportional to the induced birefringence . b schematically shows the time dependence of the measured light intensity for a setup such as the one described in a. When the magnetic field is turned on, birefringence in the sample is induced, and the measured intensity reaches a stationary value I₀ . When the magnetizing field is turned off, the magnetic particles return to a randomly oriented state. For particles that predominantly relax their net magnetization via Brownian rotational motion, the measured intensity exponentially decays to zero with a time constant given by the Brownian relaxation time of the particles, which is proportional to the cube of their hydrodynamic diameter . Since analyte molecules bound to the particle surfaces increase their hydrodynamic radii, the measured intensity of analyte-carrying particles (red curve) decays more slowly than that of plain reference particles (green curve). By fitting the measured intensity with exponential decay curves and integrating across the particle diameter , the hydrodynamic diameter distribution of the particle ensemble can be deduced. Alternatively, the intensity curve can also be fitted by a stretched exponential, where the size distribution of the particles is described by a polydispersity index . Owing to their high sensitivity to changes in the hydrodynamic shell thickness, magneto-optical methods are well suited as homogeneous particle-based biosensors that can also be applied to studies in dense and highly scattering media, which makes them advantageous compared to other techniques such as dynamic light scattering (DLS). For example, Köber et al. demonstrated in-situ evaluation of the hydrodynamic diameter distribution of magnetite NPs with three different surface coatings (plain PMAO polymer, galactose and PEG) directly within the agarose carrier matrix used for gel electrophoresis, and the obtained diameters have been shown to be independent of fluctuations of the NP concentration along the gel . Stepwise increases in the mean hydrodynamic diameters of carboxylated magnetite NPs on the covalent attachment of avidin, followed by functionalization with biotinylated immunoglobulin G (IgG) antibodies and binding of IgG antigen, have been demonstrated by Ku et al. , and they showed that the measured NP diameter increases are well in line with the expected hydrodynamic sizes of the respective molecules . Lartigue et al. carried out magneto-optical characterization of the formation of protein coronas around maghemite NPs for three different NP coatings (carboxylic moieties, glucose and citrate) by incubating them with different concentrations of both BSA and whole blood rat plasma . They showed that the formation of the protein corona depends on both the NP surface coating and the plasma concentration . Here, the glucose coating efficiently prevents further adhesion of plasma proteins, while citrate-coated NPs and NPs with carboxylic moieties first undergo cluster formation at low plasma concentrations (10%–20%), whereas larger plasma concentrations lead to single particle stabilization with a mean protein corona thickness of 8.8 nm .
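As a hedged illustration of this MRX-type analysis, the sketch below fits a synthetic single-exponential birefringence decay to extract the Brownian relaxation time and converts it into a hydrodynamic diameter; real analyses integrate over a size distribution or use a stretched exponential, and all parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Simplified sketch (monodisperse, single-exponential; real analyses integrate over
# a size distribution): extract the Brownian relaxation time from a synthetic
# MRX-type birefringence decay and convert it to a hydrodynamic diameter.

k_B, T, eta = 1.380649e-23, 298.0, 1.0e-3     # SI units; water-like viscosity assumed

def decay(t, i0, tau):
    return i0 * np.exp(-t / tau)

# Synthetic decay of a 120 nm (hydrodynamic) particle ensemble with measurement noise
d_true = 120e-9
tau_true = np.pi * eta * d_true**3 / (2 * k_B * T)   # tau_B = 3*eta*V_h / (k_B*T)
t = np.linspace(0, 5e-3, 500)
rng = np.random.default_rng(3)
intensity = decay(t, 1.0, tau_true) + rng.normal(0, 0.01, t.size)

(i0_fit, tau_fit), _ = curve_fit(decay, t, intensity, p0=(1.0, 1e-3))
d_fit = (2 * k_B * T * tau_fit / (np.pi * eta)) ** (1 / 3)

print(f"fitted tau_B = {tau_fit*1e3:.2f} ms -> hydrodynamic diameter ~ {d_fit*1e9:.0f} nm")
```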
The largest signal in magneto-optical biosensing can be achieved when the analyte molecule contains multiple binding sites and, consequently, induces cross-linking of the particles. This is demonstrated by Glöckl et al. , who carried out a direct comparison of multicore maghemite NPs functionalized by monoclonal antibodies against PSA and by polyclonal antibodies against IgG . They observed a significant increase in the relaxation time of the NPs only for IgG analyte, which they explained by the analyte-induced formation of NP clusters functionalized by polyclonal antibodies . For the detection of carcinoembryonic antigen (CEA), however, the group obtained cluster formation both for NPs (same type as employed in ) functionalized by monoclonal and polyclonal antibodies, and a detection limit for CEA in buffer in the lower nanomolar regime could be demonstrated . Employing magnetic NPs functionalized by polyclonal antibodies (same type as employed in ), the group also investigated the detection of immunoglobulin M (IgM), IgG, eotaxin, CEA and insulin as well as insulin-like growth factor 1 (IGF-1) , and they could demonstrate a detection limit in the lower nanomolar regime for CEA and IGF-1 and in the picomolar regime for IgG . Furthermore, on the basis of a linear chain formation model, the group derived a distribution function of particle clusters, and by fitting the measured intensity curves to this model, they could determine the time evolution of the relative number of monomers, dimers, trimers, etc. . In addition, from the analysis of the time dependence of the measured relaxation curves for different analyte concentrations, the group determined the kinetic parameters for the binding of eotaxin , CEA and IGF-1 to NPs functionalized by respective antibodies, and compared the results to surface plasmon resonance (SPR) data . Similarly, the binding of the lectin concanavalin A (ConA) to carbohydrate-functionalized magnetite NPs was analyzed by Köber et al. . They applied the Hill equation to study the analyte-driven formation of clusters, and directly determined the association and dissociation rate constants from homogeneous magneto-optical measurements by first adding varying concentrations (nanomolar range) of ConA analyte (association) and subsequently adding excess amounts of free carbohydrates (50 millimolar of mannose or glucose) that practically completely dissociate the analyte from the NPs . The demonstrated detection limit for ConA was in the lower nanomolar range .
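A minimal sketch of such a Hill-equation analysis is given below: a synthetic, normalized clustering dose-response is linearized in a Hill plot to read off the Hill coefficient and the half-saturation constant. The data points and parameters are invented for illustration and are not taken from the cited study.

```python
import numpy as np

# Hedged sketch (synthetic data, arbitrary parameters): Hill-equation analysis of a
# clustering dose-response. The normalized signal theta(c) = c^n / (K^n + c^n) is
# linearized as log(theta/(1-theta)) = n*log(c) - n*log(K) ('Hill plot'), from which
# the Hill coefficient n and the half-saturation constant K are read off.

conc_nM = np.array([1, 2, 5, 10, 20, 50, 100])                 # analyte concentrations (nM), assumed
theta = np.array([0.05, 0.10, 0.26, 0.47, 0.68, 0.87, 0.94])   # normalized clustering signal

x = np.log10(conc_nM)
y = np.log10(theta / (1 - theta))

n_hill, offset = np.polyfit(x, y, 1)      # slope = n, intercept = -n*log10(K)
K_half = 10 ** (-offset / n_hill)

print(f"Hill coefficient n ~ {n_hill:.2f}, half-saturation constant K ~ {K_half:.1f} nM")
```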
3.2.2. Hemispherically Coated Spherical Particle Labels

Particles with asymmetric properties are commonly designated as 'Janus' particles in reference to the two-faced Roman god Janus, a term promoted by P.G. de Gennes in his Nobel Prize address in 1991. A number of comprehensive reviews have been published within the past decade that detail the different variants, fabrication strategies and applications of Janus particles. Specifically relevant to this review article are magnetic Janus particles for in vitro diagnostic applications, which were introduced under the term magnetically modulated optical nanoprobe (MagMOON) by the Kopelman group. In its initial realization, Anker et al. employed magnetic microspheres (particles by Spherotech, Lake Forest, IL, USA) that were coated on one hemisphere by a sputter-deposited gold layer, which blocks excitation and detection of fluorophores bound to the non-coated streptavidin-functionalized hemisphere. Consequently, by controlling the alignment of the MagMOONs in the solution by an applied magnetic field, the observed fluorescence intensity can be modulated (see ). In a demonstration experiment, the authors mixed the MagMOONs with two different biotinylated fluorophores and showed concentration-dependent detection of the fluorophores bound to the MagMOON particles at their respective wavelengths above the large background of non-bound fluorophores by magnetically modulating the particle orientation in the solution. Similarly to the particle chains described in , the MagMOONs can, therefore, be employed as substrates with magnetically modulated fluorescence contrast to directly carry out sandwich immunoassays in the homogeneous sample solution phase without requiring washing. The group also demonstrated detection of single E. coli bacteria by microscopically observing the magnetorotation of individual MagMOONs (E. coli antibody functionalized magnetic particles with a diameter of 2 µm by Spherotech, Lake Forest, IL, USA, which are hemispherically coated by a 50 nm thick aluminum layer). The authors applied a rotating magnetic field (RMF) at a frequency well above the critical frequency of the MagMOON, i.e., the limiting frequency at which a magnetic particle can still follow the applied RMF synchronously. Above this critical frequency, the particle experiences an asynchronous motion, and the superimposed net rotation rate in the direction of the applied RMF decreases with increasing RMF frequency. The authors could show that, due to the increasing hydrodynamic drag, the measured net rotation rate of the MagMOONs sensitively depends on the number of bound E. coli bacteria, thus providing a tool for homogeneous and label-free quantification of bacteria concentrations. Ensemble measurements of MagMOONs, however, are hampered by the rather inhomogeneous magnetization of most available magnetic microspheres. This problem has been addressed by hemispherically coating homogeneous size-standard polystyrene particles (diameters of 1, 2, 10 and 100 µm by Polysciences Inc., Warrington, PA, USA) with a nickel layer, thereby reducing the magnetic response variability of the MagMOONs by up to almost one order of magnitude compared to previous results using coated magnetic microspheres. An increase in the throughput of biosensing by observing the magnetorotation of MagMOONs can be accomplished by a droplet-based microfluidic analysis platform, which Sinn et al. introduced and demonstrated for E. coli bacteria growth studies, including fast determination of the minimum inhibitory concentration of the antibiotic gentamicin. Furthermore, the group also demonstrated a stand-alone prototype instrument that no longer requires an optical microscope setup, but measures the magnetorotation of individual MagMOONs by a compact optical setup consisting of a laser diode source and a photodiode detector. Combining such compact optics and high-throughput droplet microfluidics, MagMOON magnetorotation as well as the related methodology of 'label acquired magnetorotation' have the potential to also find applications beyond research tools.
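As a back-of-the-envelope illustration of the asynchronous magnetorotation regime exploited in the bacteria measurements above, the sketch below estimates the step-out (critical) frequency of an idealized spherical MagMOON and the time-averaged net rotation rate above it, using the standard driven-rotor result ⟨ω⟩ = ω − sqrt(ω² − ω_c²). The particle radius, magnetic moment and field strength are assumed values chosen only to show the trend that a larger hydrodynamic load slows the net rotation; they are not taken from the cited experiments.

```python
import numpy as np

ETA = 1.0e-3  # viscosity of water (Pa*s)

def rotational_drag_sphere(radius):
    """Rotational friction coefficient of a sphere, xi_r = 8*pi*eta*a^3."""
    return 8 * np.pi * ETA * radius**3

def critical_frequency(m, b, xi_r):
    """Step-out angular frequency above which the particle no longer follows the RMF synchronously."""
    return m * b / xi_r

def mean_rotation_rate(omega_drive, omega_c):
    """Time-averaged particle rotation rate: synchronous below omega_c, asynchronous above it."""
    omega_drive = np.asarray(omega_drive, dtype=float)
    excess = np.clip(omega_drive**2 - omega_c**2, 0.0, None)
    return np.where(omega_drive <= omega_c, omega_drive, omega_drive - np.sqrt(excess))

# hypothetical MagMOON: 1 um radius, m ~ 1e-14 A*m^2, driven at 100 Hz in a 1 mT RMF
omega = 2 * np.pi * 100.0
wc_plain = critical_frequency(1e-14, 1e-3, rotational_drag_sphere(1.0e-6))
wc_loaded = critical_frequency(1e-14, 1e-3, rotational_drag_sphere(1.3e-6))  # bound bacteria enlarge the effective radius
print(mean_rotation_rate(omega, wc_plain), mean_rotation_rate(omega, wc_loaded))  # loaded particle rotates much more slowly
```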
3.2.3. Magnetic Labels with Optical Shape Anisotropy

In this section, we review methods that make use of an intrinsic optical anisotropy of rod-shaped particle labels (nanorods) to optically monitor their orientation in the sample solution. This is enabled by differences in the optical polarizability of nanorods along their principal axes in linearly polarized light. In the following, we discuss a biosensing principle based on this effect as introduced by Schrittwieser et al. Two distinct types of magnetic nanorods are presented, i.e., nickel (Ni) nanorods and cobalt (Co) nanorods coated by a noble metal shell. The measurement method can be applied for the detection as well as the analysis of proteins in solution.

Measurement Principle

Nanorods consisting of a ferromagnetic core and an antibody-functionalized noble metal shell are optimal probes for this method, which is based on detecting an increase of the hydrodynamic nanoprobe volume upon binding of target molecules (see sketch of the method in ). The nanoprobes immersed in the sample solution are excited by an external rotating magnetic field (RMF), which they follow coherently due to their permanent magnetic moment that is fixed along the nanorod axis as a consequence of the magnetic shape anisotropy. The rotational behavior depends on the hydrodynamic nanoprobe drag, which causes the nanoprobe orientation to lag behind the momentary direction of the RMF by a specific phase lag α (see ). Binding of target proteins increases the hydrodynamic nanoprobe volume and drag, thus leading to an increase of the phase lag α. This change in the phase lag represents the measurement signal of this method. To detect these phase lag changes, the anisotropic absorption and scattering properties of the nanorods in linearly polarized light are exploited. Specifically, the detected optical signal intensity depends on the actual orientation of the nanoprobes with respect to the direction of polarization of the incoming light. For measurements performed in transmission geometry, nanoprobes aligned perpendicularly to the polarization show a maximum of transmission, and vice versa. Therefore, it is possible to deduce the momentary orientation of the nanoprobes by analyzing the optical signal. Comparing the actual magnetic field orientation with the momentary nanoprobe orientation allows one to deduce the phase lag α, i.e., the measurement signal of interest. The experimental setup for biosensing measurements by this method consists of two pairs of Helmholtz coils aligned perpendicularly to each other, which are fed by two sinusoidal currents that are phase-shifted by 90°. By adjusting the current amplitudes, a uniform rotating magnetic field is generated, with the sample placed in the center of the coil pair arrangement. The optical part of the setup simply consists of a laser diode, a polarizer, and a photodetector arranged in transmission geometry. A lock-in amplifier is used to compare the magnetic signal (specifically, the voltage drop across a shunt resistor) with the optical signal. Details on the measurement setup can be found in the literature. Due to the symmetry of the applied cylindrical nanorods, the optical signal is frequency-doubled with respect to the magnetic excitation. Actual measurements can be carried out under variation of the frequency of the externally applied RMF (phase lag spectra), or at a single frequency for rapid analysis.
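The frequency dependence of the phase lag in the synchronous regime can be sketched from the torque balance m·B·sin α = ξ_r·ω, where ξ_r is the rotational friction coefficient of the rod. The snippet below evaluates this relation for a hypothetical Ni nanorod with and without a bound protein shell, using an approximate rigid-rod drag expression with end corrections; the rod dimensions, magnetic moment and field value are assumptions for illustration and are not parameters of the cited experiments.

```python
import numpy as np

ETA = 1.0e-3  # viscosity of water (Pa*s)

def rod_rotational_drag(length, diameter):
    """Approximate rotational friction coefficient of a rigid rod (rotation about its short axis)."""
    p = length / diameter
    end_correction = -0.662 + 0.917 / p - 0.050 / p**2
    return np.pi * ETA * length**3 / (3 * (np.log(p) + end_correction))

def phase_lag_deg(freq, m, b, xi_r):
    """Synchronous-regime phase lag from the torque balance m*B*sin(alpha) = xi_r*omega."""
    x = xi_r * 2 * np.pi * freq / (m * b)
    return np.degrees(np.arcsin(np.clip(x, 0.0, 1.0)))  # clipped at the step-out condition

freq = np.array([10.0, 100.0, 300.0, 1000.0])            # drive frequencies (Hz)
m, b = 5e-17, 1e-3                                        # assumed moment (A*m^2) and field (T)
alpha_plain = phase_lag_deg(freq, m, b, rod_rotational_drag(200e-9, 25e-9))
alpha_coated = phase_lag_deg(freq, m, b, rod_rotational_drag(240e-9, 65e-9))  # +20 nm shell on all sides
print(np.round(alpha_coated - alpha_plain, 2))            # phase-lag increase caused by the protein shell
```

In the actual experiments, full phase lag spectra are instead fitted with the empirical equations of the dedicated theoretical model cited in the text, which also accounts for the frequency-doubled optical signal and the particle size distribution.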
Ni Nanorod Protein Binding Results

Nickel nanorods were synthesized by electrochemical deposition into porous alumina templates. In a two-step anodization process, aluminum foils are anodized in sulfuric acid, which results in the formation of a porous alumina surface layer. The two-step anodization process is necessary to obtain ordered, homogeneous porous surface layers of small thickness. Next, the non-conductive oxide layer at the pore bottom was thinned by voltage-limited anodization, and diameter fluctuations of the pores were reduced by immersion of the foils in phosphoric acid. The pores created in this way were filled with Ni in a Watts bath by pulsed electrodeposition. Negative and positive voltage pulses were applied periodically to yield homogeneous nanorod growth (see for details). Finally, the nanorod-enclosing aluminum oxide was dissolved in sodium hydroxide with the addition of polyvinylpyrrolidone (PVP) with a molecular weight of 3500 Da as a surfactant for nanorod dispersion stabilization. Washing with water and re-dispersion of the nanorods were done by repeated cycles of centrifugation and sonication. a shows a transmission electron microscopy (TEM) image of the final single-particle dispersed nanorod solution. The mean values and standard deviations of the Ni nanorod lengths and diameters were determined by TEM image analysis, and the mean particle magnetic moment was obtained by vibrating sample magnetometry (VSM) measurements. Protein binding to the surface of the Ni nanorods was examined by recording and comparing phase lag spectra of nanorod solutions with and without added protein. BSA was chosen as a model protein that binds nonspecifically to the nanorod surface. To quantify the protein shell thickness, a recently developed theoretical model was applied to carry out model fits of the measurement results. Ni nanorods were employed together with a BSA concentration sufficient for at least five times full protein coverage of the nanorod surface. Note, however, that for similar coatings no more than a monolayer of proteins can be adsorbed. b shows the measured phase lag spectra of Ni nanorods at an external magnetic field strength of 1 mT. Here, the dots represent measured values, while the lines correspond to the results of the fitting procedure. Absolute phase lags of plain nanorods without bound protein (black) and of nanorods with bound BSA protein (grey) are plotted on the left y-axis, while the phase lag difference (blue) between the two NP states is plotted on the right y-axis. In each state, the nanorods show a specific hydrodynamic shell thickness on top of the bare metal nanorod surface, which for plain nanorods comprises the PVP surfactant layer and the stagnant surface layer, while for nanorods with bound BSA, the thickness of the protein shell is added to the total shell thickness. By fitting the measured phase lag spectra of both nanorod states with the empirical equations derived from the respective theoretical model, the authors determined an added protein shell thickness of about 22 nm.

Noble Metal Coated Co Nanorod Protein Binding Results

The Co nanorods presented here possess a small diameter of ~5 nm, which means that surface oxidation can easily affect the entire volume. Thus, a precondition for applying Co nanorods in the presented measurement method is protection of the magnetic core against degradation. This was achieved by a noble metal shell synthesized on top of the magnetic Co core.
In brief, bare Co nanorods were fabricated by decomposing a cobalt coordination precursor in the presence of different ligands in anisole solution under a hydrogen atmosphere at elevated temperature. In the next step, a Sn-containing layer was grown on top of the nanorod surface to reduce the interface energy between the Co core and the subsequent noble metal shell compounds. The first noble metal shell coating was done with Pt by reacting a Pt precursor with the nanorod surface when immersed in toluene under a hydrogen atmosphere, which was then followed by an Au coating process under similar conditions, finally resulting in Co@SnPtAu nanorods. Co-core noble-metal-shell nanorods prepared as outlined above are stable against oxidation and degradation of the magnetic core. a shows a TEM image of a nanorod batch with resulting mean particle lengths of 75 ± 6 nm and diameters of about 9.0 ± 4.5 nm. The polycrystalline nature of the nanorod shell is illustrated by the high-resolution transmission electron microscopy (HRTEM) image in b. An elemental map of such a nanorod obtained by scanning transmission electron microscopy energy-dispersive X-ray spectroscopy (STEM-EDX) is shown in c–f. Here, the different metals are represented by different colors. It can be seen that the growth of the noble metal shell materials takes place on different sections of the nanorod surface. Both shell metals together form a continuous layer that protects the magnetic Co core from oxidation, which was also shown by VSM measurements before and after exposure to air and water. The Co@SnPtAu nanorods are synthesized in organic solvents, so they have to be transferred to aqueous solution to be applicable for any kind of biological measurement. To that end, the nanorods were coated by an amphiphilic polymer consisting of a hydrophilic backbone and hydrophobic side chains. Stabilization of the NPs in water was achieved by charged carboxy groups of the hydrophilic polymer backbone on the nanorod surface. The advantage of these nanorods compared to the Ni nanorods is the presence of the carboxy groups, which can be employed for further surface modifications. This was accomplished by linking antibodies to the nanorods to target a specific protein in a sample solution (in contrast to the nonspecific adhesion of BSA to the Ni nanorods described above). The analyte protein to be detected was the soluble domain of the human epidermal growth factor receptor 2 (sHER2), and the antibody immobilized onto the nanorods was the monoclonal IgG antibody trastuzumab. Both proteins are clinically applied in the detection and treatment of breast cancer. a shows the phase lag α spectra recorded at an external magnetic field strength of 5 mT in buffer solution for nanorods without antibody functionalization (nanoreagent—black markers), nanorods including the antibody shell (nanoprobe—red markers) and for nanoprobes fully coated by the target protein (blue markers).
Fitting of the experimental data (solid lines in the figure) by the respective theoretical model resulted in hydrodynamic shell thicknesses of 15 ± 9.5 nm for the antibody shell and of 25 ± 13 nm for the antibody shell including bound target protein (both measured on top of the nanoreagents). These values are in good agreement with the respective protein sizes reported in the literature. Here, the target protein sHER2 was added in saturation (200 nM) to ensure full nanoprobe coverage. Addition of BSA protein to the nanoprobes at an even higher concentration (15 µM) did not result in a detectable change in phase lag (green markers), thus demonstrating specific binding of the sHER2 target protein. To detect the concentration of the target protein in solution, it is sufficient to measure the phase lag difference Δα between the nanoprobes and reference nanoprobes without added sHER2 at a single frequency. To that end, a separate experimental setup was chosen to generate a higher magnetic field strength of 10 mT at a fixed rotational frequency of 1000 Hz. The respective sHER2 assay results are shown in b. The sensitivity of the assay was determined by fitting the data with a logistic function, which results in a limit of detection of 440 pM.
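A minimal sketch of such a sensitivity analysis is given below: a four-parameter logistic curve is fitted to hypothetical Δα values versus sHER2 concentration, and the limit of detection is taken as the concentration whose predicted signal exceeds the blank by three standard deviations. All data points, the blank noise level and the resulting numbers are invented for illustration and do not correspond to the published assay.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(c, bottom, top, ec50, slope):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1 + (ec50 / c) ** slope)

# hypothetical phase-lag differences (deg) vs. sHER2 concentration (nM)
conc = np.array([0.05, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
dalpha = np.array([0.2, 0.4, 1.0, 2.6, 5.0, 7.2, 8.3, 8.6])
blank_sd = 0.15                                  # assumed standard deviation of blank measurements

(popt, _) = curve_fit(logistic4, conc, dalpha, p0=(0.1, 9.0, 3.0, 1.0), maxfev=5000)
bottom, top, ec50, slope = popt

# limit of detection: concentration whose predicted signal equals blank + 3*SD
target = bottom + 3 * blank_sd
lod = ec50 / ((top - bottom) / (target - bottom) - 1) ** (1 / slope)
print(f"EC50 ~ {ec50:.2f} nM, estimated LOD ~ {lod*1e3:.0f} pM")
```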
In this review, we have presented sensor principles based on magnetic actuation of magnetic particle labels for in vitro homogeneous biosensing applications. Here, we distinguished between sensing concepts that apply magnetic detection methods and sensors that rely on optical detection. The underlying measurement principles of the different sensor concepts were presented and discussed. Moreover, relevant application areas and reported biosensing results were reviewed. The presented methods cover all areas of in vitro biosensing, including the detection of small molecules such as hormones, as well as nucleic acids, proteins, and whole cells or bacteria. Homogeneous detection methods are attracting increasing attention due to their rapidity and simplicity, which are important factors for point-of-care testing applications that are becoming increasingly relevant in the fields of medicine, food control, agriculture and veterinary medicine. Because they can be manipulated by applied magnetic fields, employing magnetic particles as labels in homogeneous biosensing offers further advantages with regard to total analysis time and signal-to-noise ratio. While some measurement techniques are already technically advanced, future challenges with regard to point-of-care applications usually involve the design of inexpensive and portable measurement devices as well as the fabrication of low-cost particle labels suitable for long-term storage. Another challenge is to establish multiplexed detection of several biomarkers within the same sample solution. Finally, full-scale clinical trials will be required to prove the advantages of homogeneous biosensor principles based on magnetic particle labels over state-of-the-art methods. Meeting these future requirements will also trigger improvements in particle label synthesis and in particle bio-functionalization techniques.
Perspectives on Ease of Use and Value of a Self-Monitoring Application to Support Physical Activity Maintenance among Individuals Living with and beyond Cancer
Therefore, the purpose of the present study was to gather the perspectives of rural and remote individuals living with and beyond cancer on the use of a mobile app to promote PA maintenance. The research questions were as follows: (1) "What are participant perspectives on the ease of use of the self-monitoring app?", and (2) "What are participant perspectives on the usefulness of the self-monitoring app to support PA maintenance?". Participants were recruited from the intervention group of a 2-arm cluster randomized controlled trial (RCT) of an app-based self-monitoring intervention to support PA maintenance (i.e., long-term PA habits up to and beyond 24 weeks), which was embedded within the Exercise for Cancer to Enhance Living well (EXCEL) effectiveness-implementation study. The RCT was prospectively registered and its protocol was published prior to the study starting (NCT 04790578). Participants used the self-monitoring app to track their PA and health (i.e., energy, fatigue, symptoms, and other personally relevant factors) for the 24-week study period. Semi-structured 1-to-1 interviews were conducted directly after study completion to discuss participant perspectives on the ease of use and usefulness of the self-monitoring app to support PA maintenance, and to better understand the varied contextual factors that may have impacted these perspectives.

2.1. Qualitative Methodology

This qualitative study was guided by interpretive description methodology, a well-established methodology in applied health research based on a constructivist philosophy. Constructivist ontology posits that multiple socially constructed realities exist, shaped by contextual factors and lived experiences of each individual. Furthermore, the interaction between researchers and individuals is seen as essential to understanding both common and divergent aspects of these realities, valuing subjectivity and an inductive process when developing knowledge. These aspects of constructivism were applied to guide the study design, interview guide development, interview delivery, and data analysis and reporting. Theoretical scaffolding is a key component of interpretive description. For the present study, past exercise oncology research, the technology acceptance model, and the researcher's practice-based knowledge were used as the theoretical scaffold.

2.2. Participants and Interviews

Interviews were conducted by the first author (ME) directly after each of the 24-week app usage periods starting in April 2021, September 2021, January 2022, and April 2022. Convenience sampling was used for the first round of interviews, to include all those interested. For subsequent interview rounds, purposive sampling was used to collect a range of perspectives considering demographic backgrounds (age, gender, cancer diagnosis, treatment type, self-reported PA, baseline technology use) and experiences with the self-monitoring app (mobile app usability questionnaire ratings, app usage over time). The semi-structured interview guide was informed by previous exercise oncology studies and the technology acceptance model, and was developed by the first author (ME) with the help of experienced qualitative and mixed methods researchers (NCR, MHM), featuring open-ended questions about the self-monitoring app (ease of use, perceived usefulness) and PA maintenance behaviors. The interview guide was piloted and iteratively revised by the research team prior to participant interviews. A copy of the interview guide is available upon request.
Interviews lasted between 30 and 65 min. Discussions between authors (ME, NCR, and MHM) were held to guide purposive sampling decisions as well as to adjust the interview guide to ensure that additional unique insights were collected. For example, after 2 rounds of interviews, the interview guide was adapted to increase emphasis on PA maintenance behaviors, while sampling for round 3 focused on recruiting outside of female breast cancer participants and increasing geographic representation. Interviews were conducted at a time convenient for participants via ZOOM videoconferencing and audio was recorded for analysis. 2.3. Data Analysis Audio recordings were transcribed verbatim by two authors (ME and TL) and imported into NVivo 12 for analysis . A brief summary was written for each transcript, describing overall impressions and key concepts shared by participants. The first author (ME) coded each transcript to identify key concepts relevant to the research question. Codes were refined through iterative rounds of reviewing the codes, individual transcripts, theoretical scaffolding, and discussions with co-authors. Themes were developed inductively from the codes by the first author (ME), in collaboration with the senior author (NCR) and a qualitative expert (MHM). Discussions focused on minimizing overlap between themes and guiding meaningful interpretation that remained grounded in the data. Representative quotes were selected, and themes were interpreted in light of the theoretical scaffold. Reflexive practices (e.g., journaling) and critical discussions with co-authors were utilized throughout the study to acknowledge the impactful role of researcher positionality and to heighten rigor. 2.4. Quality Criteria Study rigor was enhanced by adhering to the four principles of quality in interpretive description . Epistemological integrity (i.e., alignment of methods and assumptions with chosen epistemology) was considered by consulting with a qualitative expert (MHM) to ensure that methodological decisions, such as the use of purposive sampling and inductive data analysis approaches, aligned with constructivism. Representative credibility (i.e., that theoretical claims fit with how the data were obtained and analyzed) was addressed via purposive sampling to gather varied perspectives and prolonged engagement with participants throughout the intervention. Analytic logic (i.e., researcher logic is documented to ensure consistency between the research process and results) was attended to by maintaining a detailed record of analysis decisions and co-author discussions. Interpretive authority (i.e., ensuring researcher interpretations are trustworthy) was addressed by processing researcher thoughts and reactions in a reflexivity journal, as well as describing author positionality below. 2.5. Researcher Positionality Given the active role of researchers in developing knowledge in interpretive description, the positionality of the first author, who conducted all interviews and had the primary role in the analysis, is acknowledged . The first author is a 30-year-old white male who is able-bodied and has been physically active his whole life. Although the first author has no personal history of cancer, he has been a close witness to the impact of cancer. Furthermore, as an exercise oncology researcher, the first author is a firm believer in the value of PA for individuals living with and beyond cancer. 
He was personally invested in the study as part of his doctoral degree project, interacting directly with participants as the study coordinator during the 6-month PA intervention period prior to conducting interviews. The author’s prior interview experience includes qualitative research among advanced lung cancer populations and leading patient advisory board meetings within the EXCEL study .
Of the 172 participants who completed the 24-week intervention period within the larger study, purposive sampling was used to invite 28 participants with varying demographic backgrounds, medical profiles, and physical activity histories to semi-structured interviews. Eighteen interviews were conducted, and 10 participants did not respond to the invitation. No reasons for refusal to participate were obtained. Participants were between 37 and 75 years old, with representation across seven Canadian provinces and territories; White, Indigenous, and South Asian identities; and eight cancer types (five with advanced cancer, nine on treatment) . The average duration of self-monitoring app use was 18.7 ± 7.5 out of 24 weeks.
Participants self-reported a median of 80.0, 210.0, and 225.0 moderate–vigorous PA (MVPA) minutes at baseline, 12 weeks, and 24 weeks, respectively. Fourteen participants completed the 24-week MVPA self-report, with eight decreasing and six increasing their MVPA during the PA maintenance period (i.e., weeks 12–24). 3.1. Themes Participants discussed their perspectives on the use and perceived value of the self-monitoring app to support PA maintenance. A visual summary of key takeaways from the present study is presented in the accompanying figure. Four themes were developed, with illustrative quotes integrated into the results below. 3.1.1. Theme One: Some Individuals Did Not Need the App to Stay Physically Active Some participants explained that they did not need the self-monitoring app to support their PA habits either due to limited PA support needs or pre-existing PA tracking habits. For example, participants with many PA facilitators and few PA barriers were able to maintain PA levels without needing any additional PA support. “Part of the reason why I could let [the app] go like that is because I really felt that I didn’t need it in terms of my own self-discipline to continue on you know with exercise” (Participant 9, age 75, female). Specific facilitators noted by this group included well-established PA habits, high self-discipline for PA, as well as stable health status, thereby reducing the negative impact of the cancer-related PA barriers (e.g., cancer treatment, cancer- and treatment-related side effects) frequently reported among oncology populations. I didn’t see any drastic changes so I kind of got bored with [it]. … I just got complacent in doing, entering it. Because I didn’t feel it was, my symptoms always were the same and my energy was always the same when I did it in the evening. … Somebody that’s in the journey as far as repairing or going through some of the different treatments, this would probably mean more. (Participant 1, age 67, female.) Continued use of the self-monitoring app was often limited among these individuals, given its lack of perceived usefulness for PA maintenance, which was achieved irrespective of app use. These participants did not need the study-specific self-monitoring app or other PA tracking tools to stay active. Others indicated that they did not need the app due to the established use of tracking tools that worked well for them. While this group did express an interest in self-monitoring, their needs were met via paper- or app-based (e.g., Garmin Connect, Google Fit) PA and symptom tracking. I like the idea that it’s paperless, but really I could just keep what I want to keep in my notes. Things that I want to track are so few, I mean sure it’s kind of nice to draw a graph. I do have a graph on graph paper of my weight. … And I keep coming back to the paper because it’s easy, it’s not time consuming. Even that takes me probably you know five minutes a day. Maybe 10. (Participant 17, age 67, female.) Therefore, the study-specific self-monitoring app was not perceived as valuable for these individuals, often leading to the abandonment of this app as they reverted to their preferred tracking tools. While participants’ discussions related to this theme were brief, given their limited PA support needs that resulted in limited app use, the findings highlight the importance of understanding needs on an individual level to determine whether a self-monitoring app may provide valuable PA maintenance support . 3.1.2.
Theme Two: Some Individuals Valued the App for Helping Them Maintain Their Physical Activity Habits Several participants described how the self-monitoring app helped them to stay on track with their PA habits, especially during the PA maintenance period when there was a lack of scheduled exercise classes. Three prominent factors contributing to the perceived value of the app were discussed: increased accountability to perform regular PA, greater awareness of current PA and its health benefits, and prompts for PA goal setting. Accountability to stay physically active Regular self-monitoring in the app made some individuals more conscious of their current PA levels, motivating them to continue building and maintaining PA habits. One participant noted how the tracking itself became a habit, further reinforcing their PA habits. It keeps me very conscious with the exercising and I think the Zamplo at nights when I’m having to record it—it’s just that more awareness. … Zamplo is just another way of that progression, that monitoring and working towards maintaining. (Participant 1, age 67, female.) I did it at seven o’clock so if I hadn’t done anything that was like maybe a bit of a prompt you put in zero minutes to be better than the next day, so I’d say you have some days I would’ve had no minutes, you feel a little bit guilty, so the next day you’d try a little bit harder. (Participant 6, age 37, female.) Daily notifications reminded some participants to first think about, then record their PA, energy, and fatigue. Furthermore, daily PA self-monitoring prompted self-reflection, often encouraging participants to plan for additional PA in the coming days. Accountability to PA stood out as the most frequently discussed sub-theme, emphasizing its importance to the perceived usefulness of a self-monitoring app for supporting PA maintenance in oncology. Awareness of health benefits and the need to modify physical activity A number of participants noted the value of recording and visualizing (i.e., via automated graphs) energy, fatigue, and other relevant health outcomes in the app. Having a visual overview of recent trends made them more aware of the positive benefits of PA, such as increased energy and decreased fatigue, which motivated them to stay physically active. With Zamplo app showing me the energy level, I would go in not having a lot of energy and then sitting down to actually think about it and input it realizing that I had more energy and less fatigue [after exercising]. And then further on, I realized, okay, maybe I don’t have as much energy, but I’m going through a lot medically. … So for me, I look at Zamplo like a little lifeline and I just think that I have changed the way I think about exercise so much. (Participant 18, age 63, female.) Self-monitoring also prompted some participants to adapt their PA routines on days with worse symptoms (e.g., higher fatigue, poor sleep quality), allowing them to stay physically active despite their fluctuating health status. Prompting physical activity goal-setting A few participants also explained how the app’s weekly goal-setting prompts, combined with daily PA tracking routines, encouraged them to set PA goals and actively monitor them throughout the week. These personal PA goals (e.g., 5000 steps per day) served as an extrinsic motivator to stay physically active. Okay, so I did that weekly [goal-setting] check-in usually on Sundays. … I was pretty consistent about doing it. 
And so, I was able to, at least see that over time, I was meeting that that step goal. … It was an easy goal to set in Zamplo and it was an easy goal to monitor in Zamplo, and because it was a single thing monitoring the graph is actually useful. (Participant 16, age 58, female.) As these participants reflected on weekly PA goals, meeting or exceeding them encouraged some individuals to maintain their PA and, as needed, increase their goals over time. Interestingly, only a small subset of participants mentioned that they used the PA goal-setting functionality, suggesting that many individuals used the app for regular self-monitoring but not for setting PA goals. Goal-setting appeared to be more relevant for individuals looking to build back up to their pre-diagnosis PA levels. Adding value via extra feedback and support While many participants emphasized the value of the app for supporting PA maintenance, others bemoaned the lack of meaningful feedback from the app, which negatively impacted continued use. A lack of automated data summaries or positive reinforcement via notifications, as well as challenges viewing graphs on mobile phones, contributed to these perceptions. As such, feedback must be both relevant and easy to understand to be perceived as a valuable self-monitoring app feature. Consistent with the technology acceptance model, the clear and understandable presentation of information can impact perceptions of both the ease of use and, indirectly, the usefulness of technology . To address these concerns and increase the app’s potential value for supporting PA maintenance, participant requests included features such as automated summary reports of weekly tracking data and smart notifications, providing insights on PA and health trends. Yeah like it just would have been nice to have like … a weekly summary or a monthly summary of whatever you were tracking. … I guess this would go more if it was like connected to a watch where you would get like a prompt like ‘this week you’ve only got this many minutes’ like almost like a motivating little quote or something right that would like come to your notifications. (Participant 6, age 37, female.) In addition, some participants spoke about the potential benefit of having a cancer exercise community to connect with directly in the app, providing valuable social support for PA during the PA maintenance phase. This suggestion was raised by participants living in remote locations who lacked social support and thus showed greater interest in engaging with an online community in the app. I think for keeping people active, it would be nice to have also have maybe more of like a community. Where we could all be part of this community, and then we would use an app to not compete against each other, but to motivate each other. And I think that would be good because we’re all in the same physical challenges where we’re tired or fatigued, you know, that sort of thing. So, you know, you’d feel a little bit more on a level playing field with people in the same situation as yourself. … Especially for someone like me who’s rural and can’t do a lot like I don’t go anywhere with COVID because, you know, I have to be so careful. (Participant 8, age 48, female.) A common thread across these requests was the need for a self-monitoring app to provide meaningful output to participants via data-driven insights and encouragement, in return for the time and effort they spent entering data. 
Given prior experiences with “smart” PA apps featuring tailored feedback (e.g., Insights in Garmin Connect) or community support (e.g., Communities in Strava), many participants expected the study-specific self-monitoring app to do the same. These perspectives suggest that the provision of meaningful feedback can increase the perceived usefulness of PA self-monitoring apps, which may result in meaningful increases in long-term app usage . 3.1.3. Theme Three: The User Experience Ranged from Intuitive to Confusing Perspectives on the ease of use varied greatly between participants. Some individuals spoke about the importance of an intuitive user experience, especially to encourage prolonged app use throughout the study period. Past app experience made the self-monitoring app easier to learn, contributing to positive ease of use perceptions as individuals quickly became more skilled at using it. “So I think, for me, the reason it was kind of easy is because I’ve been using things like that already, right? I use apps that track various metrics, … I’m used to having to track things” (Participant 13, age 45, female). However, some individuals who considered themselves less “tech savvy” and had limited prior experience with apps also found the app intuitive to use. “When you initially look at it, it looked really busy, but it was fairly easy to navigate honestly and I am not technically savvy, I’ll tell you that” (Participant 10, age 47, female). Certain app features impacted these participants’ overall perceptions of the user interface. For example, study-specific tracking templates and dual-platform support (i.e., ability to use via both smartphone and computer) were key factors that enhanced usability, prompting continued app use. In contrast, others had significant challenges with learning to use the self-monitoring app and navigating its basic functions. Many individuals in this group became overwhelmed due to the app’s user interface and high degree of customizability, noting that there were too many tracking options to choose from. “It’s almost like it was too open-ended for you to customize it, too customizable in a way that it was almost overwhelming. And for me personally, if I’m overwhelmed, I just put something down and don’t use it” (Participant 8, age 48, female). These challenges suggested that for individuals with both lower and higher technology literacy, the app was not always easy to learn, clear and understandable, or easy to use, three core elements of ease of use in the technology acceptance model . Furthermore, they contributed to a time-intensive learning process, which was noted by many as a drawback to the app. When participants experienced difficulties learning to use the app or faced persistent problems over time, they grew increasingly frustrated and often abandoned the app. To improve the ease of use, numerous participants recommended changes including a simplified user interface and the provision of additional guidance via in-app tutorials and pre-set PA tracking templates. I wonder if it would be helpful to, like, have like [a little tutorial]—you start in one place—like explain to people ‘start here, start recording daily’. And then, after you get used to that, ‘okay now, if you want to start a weekly tracking or, whatever, and this is how you do that’. (Participant 3, age 39, female.) Tutorials may contribute to a smoother learning curve, helping participants better understand the app interface and functionality. 
Furthermore, the need for additional tracking templates was noted as a potential solution to counteract the sometimes overwhelming, open-ended nature of the present self-monitoring app. Definitely having templates, … and it doesn’t have to be class-related, right? It can be anything, like your AM-PM check-in. ‘How are you doing?’ I think a lot of people would find that useful. You can track so much and for a brain like mine, it’ll shut itself down, because it’s like ‘well, I can track all of this stuff!’ And I think if you weren’t tech savvy that would be problematic too, right? Trying to figure out how to start [tracking]. (Participant 13, age 45, female.) This theme highlighted the influence of the ease of use, or lack thereof, on continued use. Notably, these perspectives were often established within the early stages of using the app, indicating the importance of simplicity, in-app tutorials, templates, and sufficient technical support to address ease of use challenges. 3.1.4. Theme Four: The Time Burden of App Use Ranged from Acceptable to Overwhelming The time burden of using a self-monitoring app was relevant to many participants. Some participants appreciated that the self-monitoring app made tracking of PA and health outcomes quick and easy, allowing them to adhere to tracking habits despite having many other commitments. For me really, truly it was like you know, during the dinner clean up, it was like a two-second put in your information, you know, get kids doing homework and it was super easy. So, I was like this is easy, and I can manage this. (Participant 6, age 37, female.) Reminder notifications and “one-click” tracking features (i.e., the “repeat from previous” button used to auto-fill information from the previous day of tracking) contributed to positive perceptions of the self-monitoring app and facilitated continued app use. Other participants found the self-monitoring app too time-consuming to use, with the need for manual data entry due to a lack of automated tracking or synchronization with other technologies (e.g., with wearable activity trackers) discussed as notable downsides. Completing daily and weekly tracking presented a significant time burden, especially when participants had a broad range of PA and health factors to record. So, I thought it would be pretty good but it was it was very time-consuming just to use it. It wasn’t simple. So that was the end. […] It took me a lot of time to figure out how to use it or I had to consult with somebody to use it and it just for the benefit, it wasn’t to me worth the time. (Participant 17, age 67, female.) This barrier was especially relevant for participants with busy schedules and lifestyle changes, such as going back to work or summer vacation, that disrupted their health tracking habits. Lifestyle barriers to PA self-monitoring were described in greater detail by younger participants balancing work and family responsibilities, as well as individuals currently undergoing cancer treatment, who had frequent appointments and greater health challenges (e.g., treatment side effects). Participants with greater time barriers struggled to fit the app into their daily routine. Well, because I started work. I worked for the census, and I was just busy with that. … Plus, it was the summer and I went to the lake and our Internet is not good at the lake at all, and, so, that—that, you know, that made it... and, and I think when you’re in the holiday mindset you just take a break from stuff like that, so I did. 
(Participant 5, age 61, female.) The time burden negatively impacted the perceived ease of use for these participants, resulting in decreased intentions and actual use of the app. This is consistent with established social cognitive behavioral theories, wherein increased barriers are associated with diminished intentions and ultimately reduced behavior . To reduce the time burden, many participants discussed the need for greater self-monitoring automation, as well as seamless integration with other PA-related technologies such as wearable activity trackers. One of the frustrations that I had was that … I had to manually transfer information in from other places. And I realize to some extent that’s a security piece, … but I would have loved to be able to take the body battery function from [Garmin] and just cut and paste it over. Or not to cut and paste it over, just have it there. (Participant 7, age 63, male.) Ease of use improvements to simplify the user experience and reduce the time burden may, at least in part, address the challenges with app engagement during the PA maintenance period that were noted during the intervention.
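To make the kind of automation participants asked for more concrete, the short sketch below illustrates, in Python, how a self-monitoring tool could implement a “repeat from previous day” auto-fill and an automated weekly summary of tracked PA minutes, energy, and fatigue. This is an editorial illustration only, not code from the study app; the data fields and function names (DailyEntry, repeat_from_previous, weekly_summary) are hypothetical stand-ins for the kinds of features participants described.

```python
from dataclasses import dataclass, replace
from datetime import date, timedelta
from statistics import mean

# Hypothetical record for one day of self-monitoring (illustrative only; not the study app's schema).
@dataclass(frozen=True)
class DailyEntry:
    day: date
    pa_minutes: int   # self-reported moderate-vigorous PA minutes
    energy: int       # self-rated energy, 0-10
    fatigue: int      # self-rated fatigue, 0-10

def repeat_from_previous(previous: DailyEntry, today: date) -> DailyEntry:
    """'One-click' auto-fill: carry yesterday's values forward so the user only edits what changed."""
    return replace(previous, day=today)

def weekly_summary(entries: list[DailyEntry]) -> str:
    """A minimal automated weekly report of the kind participants requested."""
    total_pa = sum(e.pa_minutes for e in entries)
    avg_energy = mean(e.energy for e in entries)
    avg_fatigue = mean(e.fatigue for e in entries)
    return (f"Days tracked: {len(entries)} | PA: {total_pa} min | "
            f"avg energy {avg_energy:.1f}/10 | avg fatigue {avg_fatigue:.1f}/10")

if __name__ == "__main__":
    start = date(2022, 5, 2)
    # Simulate one week of manual tracking entries.
    week = [DailyEntry(start + timedelta(days=i), pa_minutes=30 * (i % 2), energy=6, fatigue=4)
            for i in range(7)]
    print(repeat_from_previous(week[-1], week[-1].day + timedelta(days=1)))
    print(weekly_summary(week))
```

Participants’ requests suggest that, in a production app, the entries feeding such a summary would ideally be populated automatically from synced wearable data rather than typed in by hand.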
[…] It took me a lot of time to figure out how to use it or I had to consult with somebody to use it and it just for the benefit, it wasn’t to me worth the time. (Participant 17, age 67, female.) This barrier was especially relevant for participants with busy schedules and lifestyle changes, such as going back to work or summer vacation, that disrupted their health tracking habits. Lifestyle barriers to PA self-monitoring were described in greater detail by younger participants balancing work and family responsibilities, as well as individuals currently undergoing cancer treatment, who had frequent appointments and greater health challenges (e.g., treatment side effects). Participants with greater time barriers struggled to fit the app into their daily routine. Well, because I started work. I worked for the census, and I was just busy with that. … Plus, it was the summer and I went to the lake and our Internet is not good at the lake at all, and, so, that—that, you know, that made it... and, and I think when you’re in the holiday mindset you just take a break from stuff like that, so I did. (Participant 5, age 61, female.) The time burden negatively impacted the perceived ease of use for these participants, resulting in decreased intentions and actual use of the app. This is consistent with established social cognitive behavioral theories, wherein increased barriers are associated with diminished intentions and ultimately reduced behavior . To reduce the time burden, many participants discussed the need for greater self-monitoring automation, as well as seamless integration with other PA-related technologies such as wearable activity trackers. One of the frustrations that I had was that … I had to manually transfer information in from other places. And I realize to some extent that’s a security piece, … but I would have loved to be able to take the body battery function from [Garmin] and just cut and paste it over. Or not to cut and paste it over, just have it there. (Participant 7, age 63, male.) Ease of use improvements to simplify the user experience and reduce the time burden may, at least in part, address the challenges with app engagement during the PA maintenance period that were noted during the intervention. The current study highlighted the perspectives of remote and rural Canadians living with and beyond cancer on the potential of self-monitoring apps to support PA maintenance. It adds to the growing body of qualitative research on technology use for PA behavior change in oncology , filling knowledge gaps with respect to rural populations, prolonged engagement with technology, and the potential impact on PA maintenance. Whereas individuals with established PA habits and use of tracking tools saw less value in the app, others spoke about its value for keeping them accountable to PA, aware of the health benefits of PA, and prompting PA goal-setting. More personalized feedback and social support features were frequently suggested app improvements. The perceived ease of use, which varied widely from simple and intuitive to confusing and frustrating, acted as a concurrent facilitator and barrier to continued app use. Lastly, individuals emphasized the importance of quick and easy tracking in the app, with some individuals finding manual tracking too time consuming to fit within their busy lifestyles. 
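Returning to the automation and integration requests described above: one way to reduce manual entry is a thin import layer that copies daily totals from a wearable into the app’s log. The sketch below is a hypothetical illustration only; the `WearableClient` interface and its `daily_steps` method are placeholders and do not correspond to any real vendor API.

```python
from typing import Protocol

class WearableClient(Protocol):
    """Placeholder interface for any wearable vendor SDK (hypothetical)."""
    def daily_steps(self, iso_date: str) -> int: ...

def import_steps(app_log: dict[str, int], wearable: WearableClient, iso_date: str) -> dict[str, int]:
    """Copy the day's step count from the wearable into the app's log,
    so the user does not have to type it in manually."""
    app_log[iso_date] = wearable.daily_steps(iso_date)
    return app_log

# A stub standing in for a real device during testing.
class FakeTracker:
    def daily_steps(self, iso_date: str) -> int:
        return 7421

log = import_steps({}, FakeTracker(), "2023-06-05")
print(log)  # {'2023-06-05': 7421}
```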
An emphasis on the ease of use as an influential factor for continued engagement with technology, as described in theme 3, was also identified as a key theme in prior qualitative evaluations after technology-based exercise oncology interventions . These findings align with the technology acceptance model, wherein the perceived ease of use can impact actual use by changing an individual’s attitude towards technology . As in the present project, Martin et al. reported that technical problems with the PA app and its time-consuming nature were key barriers highlighted during participant interviews . Therefore, perceptions on the ease of use can either encourage or discourage further app use. While limited, these findings to date highlight that apps that are quick and easy to use may promote continued app engagement, which has been associated with improved PA maintenance in app-based interventions in adult populations . The present study provides new insights on the potential temporality of factors impacting the continued use of technology, with perceived ease of use more frequently discussed in the context of early app adoption, whereas perceived usefulness was seen as more relevant to long-term use.

Regarding the perceived usefulness of technology for building and maintaining PA habits in oncology, common themes across qualitative studies include increased accountability to PA, as well as awareness of PA levels and the health benefits of PA . For example, participants interviewed after a similar technology-based PA maintenance intervention that included health coaching, text messaging, and an activity tracker mentioned how the intervention made them more accountable to staying active and continually reminded them of the positive benefits of PA . However, as our results indicate, some individuals desire technology-based tools that not only track data, but also provide meaningful feedback and community support related to their health behaviors. Whereas technology is evidently a useful PA support tool for some, the present study emphasized that paper-based tracking methods are still preferred by others.

The present study shed light on factors impacting technology use among cancer populations that may limit its usefulness. Participants with limited time to use technology due to competing priorities, existing long-term PA habits and limited health concerns, and those using other tracking tools (i.e., other technology or paper-based tracking) saw limited value in using the self-monitoring app. A lower perceived value appeared to contribute to a lack of sustained app use over time. Better tailoring of technology-based support is thus needed to address individual needs and capabilities and improve engagement with technology, which may enhance the intervention’s effects on PA outcomes .

4.1. Implications for Future Research and App Development

Our study findings have several implications for future research and development work on technology to promote PA behavior change among cancer populations, with three key takeaways noted: (1) the need for tailored technology-based PA support, (2) the value of self-monitoring apps for PA behavior and suggestions for improving value, and (3) the impact of the ease of use and time burden on continued app use (see ).
First, given the varying PA support needs and perspectives on the value of technology to address them, a pre-intervention assessment may be useful to determine the type of PA support needed by an individual, informing better tailoring of PA maintenance support via technology. Further work to understand the interplay between factors impacting the perceived value (e.g., participant needs and preferences, PA barriers, and existing technology use) may be useful to inform this tailoring process.

Second, findings highlighted key factors (i.e., accountability to PA, awareness of PA benefits, PA goal-setting) contributing to the perceived usefulness of a self-monitoring app for supporting PA habits. An increased awareness of health benefits is especially relevant for individuals living with and beyond cancer as they seek to overcome treatment side effects and chronic symptoms such as cancer-related fatigue. In addition, more active support (e.g., summary reports, encouraging insights, in-app communities) was requested, which may make the app more useful for overcoming PA maintenance challenges. According to the technology acceptance model, perceived usefulness is the strongest predictor of actual technology use . App developers and researchers are encouraged to consider these factors to optimize the impact of technology on PA maintenance in oncology.

Third, the ease of use is particularly important for apps aiming to support PA among cancer populations . Whereas positive experiences encourage further use and allow individuals to discover an app’s value, significant early challenges often lead to app abandonment and preclude participants from realizing its full value. Based on the perspectives of the individuals living with and beyond cancer in the present sample, some of whom had limited experience with apps and lower technology literacy, it is advisable to simplify the user experience as much as possible to avoid frustration. Furthermore, given the busy lifestyles of these individuals, minimizing the time burden of app-based tracking (e.g., via automation and inter-app integration) is key to ensuring that self-monitoring remains feasible. Given the significant time burden of cancer treatment and follow-up care, as well as the common cancer-related cognitive challenges (e.g., chemo brain), ease of use considerations are likely to be even more relevant for individuals living with cancer than for healthy adults.

4.2. Strengths and Limitations

The key study strengths lie in our sampling approach and analyses. Purposive sampling was utilized to increase the demographic diversity of the sample in terms of age, gender, location, and cancer type. Sampling decisions also considered app use and PA levels over time, thus capturing understudied perspectives from individuals with low app engagement and those who did not maintain PA habits . The analyses were enhanced by adhering to the quality criteria of interpretive description, with ongoing self-reflexive practices and multiple rounds of discussion with co-authors to remain cognizant of the author’s influence on the study findings and to avoid overinterpretation .

The study had several limitations. Despite the use of purposive sampling, the larger exercise oncology intervention from which we invited participants lacked variation in ethnicity and education level . Most participants were female, white, and had moderate–high incomes and education levels.
Furthermore, having the study coordinator conduct interviews may have adversely affected participant willingness to share negative perceptions during interviews. Lastly, the timing of the interviews, which were conducted 24 weeks after participants started using the self-monitoring app, made it difficult for participants to describe their early experiences with the app. Novel recruitment strategies and earlier interviews may help address these limitations in future work.
The present study sheds light on the potential value of a self-monitoring app to support PA maintenance among rural and remote Canadians living with and beyond cancer. The app was less suitable for participants with busy lifestyles, and not needed for those with established PA habits, limited health concerns to track, or a preference for other PA tracking tools. After a 24-week period of PA and health self-monitoring via the app, participants described how perceptions of ease of use impacted their decisions to continue using the app, with accumulating challenges often resulting in app abandonment. They discussed the value of the self-monitoring app for supporting PA maintenance by increasing accountability to PA, awareness of PA-related health benefits, and promoting PA goal-setting. Participants provided suggestions for improving the ease of use and perceived value of the self-monitoring app. These findings provide valuable insights to motivate additional research and inform the ongoing development of technology to help cancer populations stay physically active. Further research is needed to augment the present findings and address remaining challenges related to prolonged technology use and PA maintenance, especially among understudied cancer populations (e.g., rural, ethnic minorities, lower socioeconomic status, and advanced cancers). |
German, Austrian, and Swiss guidelines for systemic treatment of gastric cancer | 6120300d-6c74-4ac9-9312-e281e19e1221 | 10761449 | Internal Medicine[mh] | Outcomes of patients with cancer depend highly on access to high-quality care. Part of the established quality-of-care criteria is adherence to evidence-based treatment recommendations. To provide practising oncologists in the three German-speaking countries in Europe, comprising a population of approximately 100 million inhabitants, with up-to-date evidence-based guidelines for patient care, the scientific German, Austrian, and Swiss societies of hematology and oncology nominated a multidisciplinary group of experts to revise consensus-based oncology treatment guidelines based on available scientific evidence. This process is coordinated by the German Society of Hematology and Medical Oncology (DGHO). Here, we report on the treatment recommendations from the latest version of the multidisciplinary guidelines for gastric cancer (Onkopedia), finalized in August 2023. This article focusses on locally advanced and metastatic stages (IB-IV). In summary, systemic perioperative chemotherapy is recommended as a mainstay of treatment for patients presenting with localized gastric cancer (stages IB-III). In stage IV gastric cancer patients, treatment goals are palliative in most patients. Sequential lines of chemotherapy have shown to provide the best chances for prolonging patients’ survival, providing symptom control and lead to a better maintenance of quality of life. The assessment of tumor tissue for the expression of programmed death ligand-1 (PD-L1), human epidermal growth factor receptor-2 (HER2) and DNA mismatch repair (MMR) enzymes informs the recommendation for complementing systemic treatment with PD-1-directed immune checkpoint inhibition or HER-2-directed targeted treatment.
Diagnosis

Initial diagnosis

Endoscopy is considered the most sensitive and specific diagnostic method. Using high-resolution video-assisted endoscopy, it is possible to detect even discrete changes in color, mucosal surface, and architecture of the gastric mucosa. Endoscopic detection of early lesions can be improved by chromoendoscopy. The aims of further diagnostics are to determine the stage of the disease and to guide therapy, see Table .

Histology and subtypes

Histologic diagnosis of gastric cancer should be made from a biopsy, which is evaluated by two experienced pathologists .

Laurén classification

Histologically, gastric cancer is characterized by a strong heterogeneity, as several different histological features may be present in one tumor. Over the past decades, histologic classification has been based on the Laurén classification :
- Intestinal type, approximately 54%
- Diffuse type, approx. 32%
- Indeterminate type, approx. 15%
The diffuse subtype is found more in women and people of younger age, while the intestinal type is more common in men and people of older age and is associated with intestinal metaplasia and Helicobacter pylori infection .

World Health Organization (WHO) classification of gastric cancer

The World Health Organization (WHO) classification distinguishes four definitive types of gastric cancer :
- Tubular
- Papillary
- Mucinous
- Poorly cohesive (including signet ring cell carcinoma)
The classification is based on the predominant histologic pattern of the carcinoma, which often coexists with less dominant features or other histologic patterns.

The Cancer Genome Atlas (TCGA) classification

Molecular genetic studies divide gastric cancer into molecular subtypes based on studies of the genome, transcriptome, epigenome, and proteome. The most popular molecular subtyping according to TCGA distinguishes four subtypes :
- Chromosomal instability—CIN
- Epstein–Barr virus-associated—EBV
- Microsatellite instability—MSI
- Genomically stable—GS
This classification currently has limited impact on treatment selection.

Stages and staging

TNM staging

The classification of the extent of the primary tumor and metastasis is based on the UICC/AJCC TNM criteria . Since January 1, 2017, the 8th edition has been used in Europe . The TNM criteria are summarized in Table , and the staging is summarized in Table .

Endoscopic ultrasound (EUS) is particularly suitable for determining the clinical T category, as it can best visualize the different layers of the gastric wall. EUS should, therefore, be part of primary staging in a patient with a curative therapeutic approach.

The following characteristics serve to identify malignant lymph nodes on CT slice imaging :
- Diameter ≥ 6–8 mm (shorter axis) in perigastric lymph nodes
- Round shape
- Central necrosis
- Loss of the fat hilus
- Heterogeneous or enhanced contrast agent uptake
The sensitivity of CT for lymph node staging is variably estimated at 62.5–91.9% in systematic reviews . EUS improves the accurate determination of the T and N categories and can help determine the proximal and distal margins of the tumor. EUS is less accurate for tumors of the antrum. EUS is considered more accurate than CT in diagnosing malignant lymph nodes. Signs of malignancy on EUS include :
- Hypoechoic
- Round shape
- Blurred demarcation from the surrounding area
- Size in the longest diameter > 1 cm
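To make the CT criteria listed above concrete, the following sketch reports which of them a given perigastric lymph node fulfils. The data structure and the single 8 mm short-axis cut-off are simplifying assumptions for illustration; this is not a validated diagnostic rule and does not replace radiological assessment.

```python
from dataclasses import dataclass

@dataclass
class LymphNodeFinding:
    """Simplified CT description of one perigastric lymph node (illustrative only)."""
    short_axis_mm: float
    round_shape: bool
    central_necrosis: bool
    fat_hilus_lost: bool
    heterogeneous_enhancement: bool

def suspicious_features(node: LymphNodeFinding, size_cutoff_mm: float = 8.0) -> list[str]:
    """Return the malignancy criteria from the text that this node fulfils."""
    features = []
    if node.short_axis_mm >= size_cutoff_mm:
        features.append(f"short axis >= {size_cutoff_mm} mm")
    if node.round_shape:
        features.append("round shape")
    if node.central_necrosis:
        features.append("central necrosis")
    if node.fat_hilus_lost:
        features.append("loss of the fat hilus")
    if node.heterogeneous_enhancement:
        features.append("heterogeneous or enhanced contrast uptake")
    return features

node = LymphNodeFinding(9.5, True, False, True, False)
print(suspicious_features(node))
# ['short axis >= 8.0 mm', 'round shape', 'loss of the fat hilus']
```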
Therapy structure

Multidisciplinary planning is required for any initial treatment recommendation. It should be developed in a qualified multidisciplinary tumor board. Core members of the multidisciplinary board include the following disciplines: Visceral Surgery, Medical Oncology, Radiation Oncology, Gastroenterology, Radiology and Pathology. Whenever possible, patients should be treated in clinical trials. Therapy is stage-adapted. A treatment algorithm for the stage-adapted management of gastric cancer is shown in Fig. .

Stage IA—T1a

Since the probability of lymph node metastasis in mucosal gastric cancer (T1a) is very low, endoscopic resection (ER) may be sufficient . If histopathologic workup after endoscopic resection reveals that tumor infiltration extends into the submucosa (T1b), surgical resection with systematic lymphadenectomy should be performed, as lymph node metastases may already be present in up to 30% of cases. Gastric cancers classified as pT1a cN0 cM0 should be treated with endoscopic resection, considering the adapted Japanese criteria . A (limited) surgical approach is an alternative. Perioperative or adjuvant chemotherapy is not indicated for stage IA (T1a) patients.

Stage IA—T1b

For stage IA gastric cancer with infiltration of the submucosa, the risk of lymph node metastases is 25–28%. The 5-year survival rate is 70.8% for all stage IA in the SEER database , and the cancer-specific survival rate at 10 years is 93% in the Italian IRGGC analysis. Therapy of choice in stage I (T1b category) is radical surgical resection (subtotal, total, or transhiatal extended gastrectomy). Limited resection can be recommended only in exceptional cases due to the limited accuracy of pre-therapeutic staging. A benefit from perioperative or adjuvant chemotherapy has not been established for stage IA (T1b) patients.

Stage IB—III

In stage IB—III, resection should consist of radical resection (subtotal, total, or transhiatal extended gastrectomy) in combination with D2 lymphadenectomy. Subtotal gastrectomy can be performed if safe free tumor margins can be achieved. The previously recommended tumor-free margins of 5 and 8 cm for intestinal and diffuse tumor growth types, respectively, are no longer accepted. The scientific evidence for definitive recommendations is low. A negative oral margin in the intraoperative frozen section is crucial.

Perioperative chemotherapy with a platinum derivative, a fluoropyrimidine, and an anthracycline significantly prolonged overall survival in patients with resectable gastric cancer in the MAGIC trial . In the French FNCLCC/FFCD multicenter study, perioperative chemotherapy with a platinum derivative and a fluoropyrimidine without anthracycline showed a comparable effect size on improving survival . Currently, neither chemotherapy regimen is the first choice. Treatment according to the FLOT regimen (5-fluorouracil/folinic acid/oxaliplatin/docetaxel) further improved progression-free survival (hazard ratio, HR 0.75) and overall survival (HR 0.77) in patients with stage ≥ cT2 and/or cN + compared with therapy analogous to MAGIC. The relatively higher efficacy of FLOT was shown to be consistent across relevant subgroup analyses such as age, histology, and tumor location. The rate of perioperative complications was comparable . For patients with gastric cancer ≥ stage IB who received resection without prior chemotherapy (e.g., due to misdiagnosed tumor stage prior to surgery), adjuvant chemotherapy may be recommended.
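For orientation, the recommendations for localized disease described above can be condensed into a simple lookup. The sketch below is illustrative only; it deliberately omits the nuances discussed in the remainder of this section (HER2 and MSI status, margins, comorbidity), and any real treatment decision belongs in the multidisciplinary tumor board.

```python
def localized_stage_approach(stage: str) -> str:
    """Condensed lookup of the stage-adapted recommendations summarized above.
    Purely illustrative; not a substitute for multidisciplinary decision-making."""
    mapping = {
        "IA (T1a)": ("Endoscopic resection considering the adapted Japanese criteria; "
                     "(limited) surgery as an alternative; no perioperative/adjuvant chemotherapy"),
        "IA (T1b)": ("Radical surgical resection; no established benefit from "
                     "perioperative or adjuvant chemotherapy"),
        "IB-III": ("Perioperative chemotherapy with the FLOT regimen plus radical "
                   "resection with D2 lymphadenectomy"),
    }
    return mapping.get(stage, "Outside the scope of this simplified sketch")

print(localized_stage_approach("IB-III"))
```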
In HER2-positive tumors, a benefit in terms of overall survival from adding a HER2 antibody to perioperative chemotherapy has not been proven, and such a combination therefore cannot be recommended outside of clinical trials. The AIO-PETRARCA phase 2 study showed a higher histopathologic remission rate when FLOT chemotherapy was combined with trastuzumab + pertuzumab and a trend in favor of better progression-free and overall survival . These data require validation in larger and independent cohorts.

In microsatellite instability (MSI-H) localized gastric carcinoma, the efficacy of perioperative chemotherapy, based on retrospective data analyses , has been controversially discussed. However, more recent data from the DANTE trial show that complete and subtotal tumor remissions can be achieved with FLOT chemotherapy even in MSI-H subtype gastric carcinomas . Thus, according to the current status, perioperative chemotherapy with the FLOT regimen remains indicated for MSI-H gastric cancers if tumor response is pursued. The FFCD-NEONIPIGA phase 2 study showed a high histopathologic remission rate after 12 weeks of therapy with nivolumab + ipilimumab without chemotherapy in resectable MSI-H cancers . Data require validation in larger and independent patient cohorts.

After R1 resection, adjuvant radiochemotherapy may be considered.

Stage IV

The aim of therapy is usually non-curative. The first priority is systemic drug therapy, supplemented in individual cases by local therapeutic measures. Active symptom control and supportive measures such as nutritional counseling, psychosocial support, and palliative care are an integral part of treatment. The prognosis of patients with locally advanced and irresectable or metastatic (pooled here as "advanced") gastric cancer is unfavorable. Studies evaluating the benefit from chemotherapy have shown a median survival of less than 1 year . However, there is evidence that chemotherapy can prolong the survival of patients with advanced gastric cancer compared to best supportive therapy alone and maintain quality of life longer .

Systemic tumor therapy

The currently recommended algorithms for drug therapy of patients with advanced gastric cancer are shown in Figs. , , and .

First-line chemotherapy, molecular targeted therapy, and immunotherapy

Chemotherapy

The standard of care for first-line chemotherapy of advanced gastric cancer is a platinum–fluoropyrimidine doublet. Oxaliplatin and cisplatin are comparably effective, with a more favorable side effect profile for oxaliplatin. This may contribute to a trend toward better efficacy, especially in patients > 65 years . Fluoropyrimidines can be administered as infusion (5-FU) or orally (capecitabine or S-1). Oral fluoropyrimidines are comparably effective to infused 5-FU . Capecitabine is approved in combination with a platinum derivative and has been studied with both cis- and oxaliplatin in European patients. S-1 is established as a standard of care in Japan and approved in Europe for palliative first-line therapy in combination with cisplatin. Infused 5-FU should be preferred over oral medications in patients with dysphagia or other feeding problems. In elderly or frail patients, results of the phase III GO-2 trial support a dose-reduced application of oxaliplatin–fluoropyrimidine chemotherapy (to 80 or 60% of the standard dose from the beginning), resulting in fewer side effects with comparable efficacy .
The addition of docetaxel to a platinum–fluoropyrimidine combination (three-weekly DCF regimen) improved radiographic response rates and prolonged overall survival in a historical phase III trial, but also resulted in significantly increased side effects . Other phase II trials examined modified docetaxel–platinum–fluoropyrimidine triplets and showed reduced toxicity compared with DCF in some cases . However, the higher response rate of a triplet (37% vs. 25%) does not translate into prolonged survival in recent trials, which included effective second-line regimens. In the phase III JCOG1013 trial, patients with advanced gastric cancer received either cisplatin plus S-1 or cisplatin plus S-1 and docetaxel. There were no differences in radiographic response, progression-free survival, or overall survival . Therefore, with increased toxicity and uncertain impact on overall survival, no recommendation can be made for first-line docetaxel–platinum–fluoropyrimidine therapy, and a platinum–fluoropyrimidine doublet remains the standard approach. In individual cases, e.g., when fast tumor regression is urgently required, first-line therapy with a platinum–fluoropyrimidine–docetaxel triplet may be indicated.

Irinotecan-5-FU has been compared with cisplatin-5-FU and with epirubicin–cisplatin–capecitabine in randomized phase III trials and showed comparable survival with controllable side effects . Irinotecan-5-FU can, therefore, be considered a treatment alternative to platinum–fluoropyrimidine doublets according to scientific evidence; however, irinotecan has no formal approval in Europe for gastric cancer.

HER2-positive gastric cancer

HER2 positivity is defined in gastric cancer as the presence of protein expression with immunohistochemistry score [IHC] of 3 + or IHC 2 + and concomitant gene amplification on in situ hybridization [ISH], HER2/CEP17 ratio ≥ 2.0. HER2 diagnosis should be quality controlled . Trastuzumab should be added to chemotherapy in patients with HER2-positive advanced gastric cancer . The recommendation is based on data from the phase III ToGA trial, showing a higher response rate and prolonged survival for trastuzumab–cisplatin–fluoropyrimidine chemotherapy vs. chemotherapy alone using the above selection criteria; the additional trastuzumab side effects are minor and controllable . Combinations of trastuzumab and oxaliplatin plus fluoropyrimidine show comparable results to the historical cisplatin-containing ToGA regimen .

Based on the not yet fully reported results of the Keynote-811 study, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) published a positive opinion for pembrolizumab plus trastuzumab and chemotherapy as first-line treatment for HER2-positive advanced gastric or gastroesophageal junction (GEJ) adenocarcinoma expressing PD-L1 (CPS ≥ 1) on 20th of July 2023 ( https://www.ema.europa.eu/en/medicines/human/summaries-opinion/keytruda-10 ). If available, this combination should be preferred over trastuzumab plus chemotherapy in the respective patient population (Fig. ).

Immunotherapy

The phase III CheckMate 649 trial evaluated the addition of nivolumab to chemotherapy (capecitabine-oxaliplatin or 5-FU/folinic acid-oxaliplatin) in patients with previously untreated gastric, esophago-gastric junction, or esophageal adenocarcinoma . The study included patients regardless of tumor PD-L1 status; the dual primary endpoints were overall survival and progression-free survival.
Approximately 60% of the study population had tumors with a PD-L1 CPS ≥ 5. Nivolumab plus chemotherapy yielded a significant improvement over chemotherapy alone in overall survival (14.4 vs. 11.1 months, HR 0.71 [98.4% CI 0.59–0.86]; p < 0.0001) and progression-free survival (7.7 vs. 6.0 months, HR 0.68 [98% CI 0.56–0.81]; p < 0.0001) in patients with a PD-L1 CPS ≥ 5. The overall survival benefit was greater in patients with MSI-H tumors treated with nivolumab plus chemotherapy vs. chemotherapy (unstratified hazard ratio 0.38; 95% confidence interval 0.17, 0.84). The Asian phase II/III ATTRACTION-04 trial also showed a significant improvement in progression-free survival with nivolumab and first-line chemotherapy, but with no significant improvement in overall survival compared to first-line chemotherapy alone. The most likely reason for the lack of survival benefit (> 17 months in both arms) is that many patients received post-progression therapies including immunotherapy after first-line therapy .

The multinational randomized phase III Keynote-859 trial included 1589 patients with advanced incurable gastric cancer. Patients received either platinum–fluoropyrimidine plus pembrolizumab or the same chemotherapy plus placebo every 3 weeks. Overall survival was prolonged in the pembrolizumab group (HR 0.78 [95% CI 0.70–0.87], p < 0.0001). The effect was more pronounced in the subgroup with a PD-L1 CPS ≥ 10 (HR 0.64), whereas efficacy was lower for CPS < 10 (HR 0.86). The overall survival benefit was greater in patients with MSI-H tumors treated with pembrolizumab plus chemotherapy vs. chemotherapy (hazard ratio 0.34; 95% confidence interval 0.176, 0.663) . These results thus complement the positive trial data from the phase III Keynote-590 study, which led to EU approval of pembrolizumab in combination with platinum–fluoropyrimidine chemotherapy for adenocarcinoma of the esophagus and esophago-gastric junction .

Positive phase III trial data were also presented on two immune checkpoint (PD-1) inhibitors not currently approved in Europe. Sintilimab in combination with oxaliplatin and capecitabine improved overall survival in the phase III ORIENT-16 trial . In the phase III Rationale-305 study, tislelizumab prolonged overall survival in combination with platinum–fluoropyrimidine or platinum-investigator-choice chemotherapy in patients with a positive PD-L1 score. PD-L1 was evaluated according to a scoring system not yet established internationally (the so-called Tumor Area Proportion score, TAP) . ORIENT-16 and Rationale-305 have not been fully published to date, but support the overall assessment that PD-1 immune checkpoint inhibitors can improve the efficacy of chemotherapy (depending on PD-L1 expression).

Claudin 18.2

Data from the multinational phase III Spotlight trial were recently published. These show that in patients with advanced irresectable gastric cancer and tumor claudin 18.2 expression in ≥ 75% of tumor cells, zolbetuximab, a chimeric monoclonal IgG1 antibody directed against claudin 18.2, in combination with FOLFOX chemotherapy prolongs overall survival (median 18.23 vs. 15.54 months, HR 0.750, p = 0.0053). The main side effects of zolbetuximab are nausea and vomiting, especially during the first administrations . The results of the phase III Spotlight trial are largely confirmed by the multinational phase III GLOW trial, in which the chemotherapy doublet was used as a control therapy or combination partner for zolbetuximab .
It remains to be seen whether the European Medicines Agency will grant approval to zolbetuximab in patients with claudin 18.2-positive metastatic and previously untreated gastric cancer.

Second-line and third-line therapy

Chemotherapy and anti-angiogenic therapy

Figures and show the algorithm for second- and third-line therapy for patients with advanced gastric cancer. The evidence-based chemotherapy options in this setting are paclitaxel, docetaxel, and irinotecan, which have comparable efficacy with different specific toxicities . Irinotecan may be preferred in patients with preexisting neuropathy; however, there is no EU approval. 5-FU/folinic acid plus irinotecan (FOLFIRI) is also occasionally used, but the scientific evidence for its use in second- and third-line treatment is limited . Ramucirumab plus paclitaxel is the recommended standard for second-line therapy and is approved in the EU. The addition of the anti-vascular endothelial growth factor receptor-2 (VEGFR-2) antibody ramucirumab to paclitaxel increases tumor response rates and prolongs progression-free and overall survival according to the results of the phase III RAINBOW trial . In the earlier phase III REGARD trial, ramucirumab monotherapy had already shown prolonged survival compared to placebo, albeit with a low radiological response rate .

Immunotherapy in second- and third-line therapy

In the phase III KEYNOTE-061 trial, pembrolizumab monotherapy did not show prolonged overall survival compared with chemotherapy . However, an exploratory subgroup analysis identified a clear benefit for anti-PD-1 immunotherapy in patients with MSI-H gastric cancer . Therefore, PD-1 inhibition is recommended in advanced MSI-H carcinomas no later than in second-line treatment. Pembrolizumab has European approval for this indication based on the Keynote-061 and Keynote-158 trials . Of note, pembrolizumab in second line for MSI-H advanced gastric cancer is not recommended when immunotherapy was administered in first-line treatment. Other biomarkers, particularly EBV and tumor mutation burden, are also discussed as predictive factors for PD-1 immune checkpoint inhibitor efficacy . However, the evidence to date is insufficient to support a positive recommendation for immunotherapy based upon the presence of these biomarkers.

HER2-targeted therapy

Studies evaluating trastuzumab, lapatinib, and trastuzumab emtansine for second-line treatment in patients with HER2-positive carcinomas were negative . Therefore, these drugs should not be used in this setting outside of clinical trials. A randomized phase II trial (Destiny-GC-01) showed an improvement in tumor response rate and overall survival for the antibody–drug conjugate trastuzumab deruxtecan (T-DXd) compared with standard chemotherapy in patients with pretreated HER2-positive advanced gastric cancer . Destiny-GC-04 is an ongoing study assessing the efficacy and safety of T-DXd compared with ramucirumab and paclitaxel in participants with HER2-positive (defined as immunohistochemistry [IHC] 3 + or IHC 2 + /in situ hybridization [ISH] +) gastric or esophago-gastric junction adenocarcinoma who have progressed on or after a trastuzumab-containing regimen and have not received any additional systemic therapy ( https://classic.clinicaltrials.gov/ct2/show/NCT04704934 ). Prerequisites for inclusion in the Destiny-GC-01 study were at least two prior lines of therapy, prior treatment with a platinum derivative, a fluoropyrimidine, and trastuzumab, and previously confirmed HER2 positivity.
The study recruited patients exclusively in East Asia. The results of Destiny-GC-01 were largely confirmed in the single-arm phase II Destiny-GC-02 trial, which included non-Asian patients in second-line therapy. Platinum–fluoropyrimidine–trastuzumab pretreatment and confirmed HER2 positivity of the tumor in a recent re-biopsy before initiating T-DXd therapy were mandatory . The EU approval includes the following indication of T-DXd: monotherapy for the treatment of adult patients with advanced HER2-positive adenocarcinoma of the stomach or esophago-gastric junction who have received a prior trastuzumab-based regimen. We recommend re-checking HER2 status according to the classically established HER2 diagnostic criteria prior to therapy with T-DXd, especially if use in second-line therapy is planned, where paclitaxel–ramucirumab is available as a valid alternative. This recommendation is based on the inclusion criteria of the Destiny-GC-02 trial and the knowledge that loss of HER2 status occurs in approximately 30% of gastric cancers after first-line therapy with trastuzumab . There is initial evidence of efficacy of T-DXd in tumors with low HER2 expression . However, data are not yet sufficient to recommend its use.

Third-line therapy

For the treatment of patients with advanced gastric cancer in the third line and beyond, the best evidence is available for trifluridine–tipiracil (FTD/TPI) based on the phase III TAGS trial. Median overall survival with FTD/TPI vs. placebo was significantly improved in the overall patient cohort, in the third-line cohort, and in the fourth-line cohort . Therefore, if oral therapy is feasible, trifluridine–tipiracil (FTD/TPI) should be used; alternatively, if intravenous therapy is preferred, irinotecan or a taxane can be given, if not already used in a previous line of therapy. As shown above, T-DXd is a very effective third-line therapy for HER2-positive carcinoma after trastuzumab pretreatment. Nivolumab also proved to be effective; however, the data from the ATTRACTION-02 trial were obtained exclusively in Asian patients , and nivolumab therefore does not have EMA approval for third-line treatment of advanced gastric cancer and cannot be recommended. Following the recommendation of a molecular tumor board, an unapproved therapeutic option may also be preferred in justified cases, especially if the recommendation can be based on an ESMO Scale for Clinical Actionability of Molecular Targets (ESCAT) level I or II .

Surgery for metastatic gastric cancer

The randomized phase III REGATTA trial showed that gastrectomy in addition to chemotherapy for metastatic disease did not confer a survival benefit compared with chemotherapy alone . International data analyses show that surgical therapy for oligometastatic disease is increasingly perceived as a treatment option . The AIO-FLOT3 phase II trial reported results on the feasibility of resection for stage IV gastric cancer and survival in highly selected patients with oligometastatic disease without primary progression on FLOT chemotherapy . The potential prognostic benefit of resections for oligometastatic gastric cancer is currently being evaluated in randomized phase III trials [RENAISSANCE (NCT0257836) and SURGIGAST (NCT03042169)]. In a Delphi procedure, a definition for oligometastasis was determined in a European expert group (OMEC).
According to this definition, oligometastatic disease comprises the following phenotypes: 1–2 metastases in either liver, lung, retroperitoneal lymph nodes, adrenal glands, soft tissue or bone .

Supportive therapy and nutrition

It is recommended that nutritional and symptom screening with appropriate tools be performed regularly in all patients with advanced gastric cancer, and appropriate supportive therapies be derived. A study from China showed that early integration of supportive-palliative care is effective and suggests a survival benefit in patients with advanced gastric cancer . Weight loss is a multifactorial phenomenon and may be due to digestive tract obstruction, malabsorption, or hypermetabolism. Clinical data sets show that weight loss of ≥ 10% before chemotherapy or ≥ 3% during the first cycle of chemotherapy is associated with poorer survival . Also, a change in body composition with impaired muscular capacity was shown to be prognostically unfavorable in patients with advanced gastric cancer . The modified Glasgow Prognostic Score (serum CRP and albumin) can be used to assess the extent of sarcopenia and the prognosis of patients with advanced gastric cancer . From this, it can be concluded that screening for nutritional status should be performed in all patients with advanced gastric cancer (for example, using Nutritional Risk Screening, NRS) and expert nutritional counseling and co-supervision should be offered if nutritional deficiency is evident. Dysphagia in proximal gastric cancer can be improved with radiotherapy or stent insertion . Single-dose brachytherapy is the preferred option at some centers and results in longer-lasting symptom control and fewer complications than stent insertion. Stenting is needed for severe dysphagia and especially in patients with limited life expectancy, as the effects of the stent are immediate, whereas radiotherapy improves dysphagic symptoms only after approximately 4–6 weeks . If radiotherapy or a stent is not an option, enteral nutrition via naso-gastric, naso-jejunal, or percutaneously placed feeding tubes may provide relief . The indication for parenteral nutrition follows generally accepted guidelines.
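The modified Glasgow Prognostic Score mentioned above combines serum CRP and albumin into a three-level score. The sketch below uses the cut-offs most commonly reported for this score (CRP > 10 mg/L, albumin < 35 g/L); these thresholds are an assumption of this illustration and should be checked against local laboratory standards and the primary literature.

```python
def modified_glasgow_prognostic_score(crp_mg_per_l: float, albumin_g_per_l: float) -> int:
    """Modified Glasgow Prognostic Score (mGPS), illustrative implementation.
    Assumed cut-offs: CRP > 10 mg/L, albumin < 35 g/L.
    0 = CRP not elevated; 1 = CRP elevated, albumin normal; 2 = CRP elevated and albumin low.
    Higher scores indicate a systemic inflammatory response and a less favorable prognosis."""
    if crp_mg_per_l <= 10:
        return 0
    return 2 if albumin_g_per_l < 35 else 1

print(modified_glasgow_prognostic_score(crp_mg_per_l=24.0, albumin_g_per_l=31.0))  # 2
```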
Multidisciplinary planning is required for any initial treatment recommendation. It should be developed in a qualified multidisciplinary tumor board. Core members of the multidisciplinary board include the following disciplines: Visceral Surgery, Medical Oncology, Radiation Oncology, Gastroenterology, Radiology, and Pathology. Whenever possible, patients should be treated in clinical trials. Therapy is stage-adapted. A treatment algorithm for the stage-adapted management of gastric cancer is shown in Fig. .

Stage IA—T1a

Since the probability of lymph node metastasis in mucosal gastric cancer (T1a) is very low, endoscopic resection (ER) may be sufficient. If histopathologic workup after endoscopic resection reveals that tumor infiltration extends into the submucosa (T1b), surgical resection with systematic lymphadenectomy should be performed, as lymph node metastases may already be present in up to 30% of cases. Gastric cancers classified as pT1a cN0 cM0 should be treated with endoscopic resection, considering the adapted Japanese criteria. A (limited) surgical approach is an alternative. Perioperative or adjuvant chemotherapy is not indicated for stage IA (T1a) patients.

Stage IA—T1b

For stage IA gastric cancer with infiltration of the submucosa, the risk of lymph node metastases is 25–28%. The 5-year survival rate is 70.8% for all stage IA in the SEER database, and the cancer-specific survival rate at 10 years is 93% in the Italian IRGGC analysis. The therapy of choice in stage I (T1b category) is radical surgical resection (subtotal, total, or transhiatal extended gastrectomy). Limited resection can be recommended only in exceptional cases because of the limited accuracy of pre-therapeutic staging. A benefit from perioperative or adjuvant chemotherapy has not been established for stage IA (T1b) patients.

Stage IB—III

In stage IB—III, resection should consist of radical resection (subtotal, total, or transhiatal extended gastrectomy) in combination with D2 lymphadenectomy. Subtotal gastrectomy can be performed if safe tumor-free margins can be achieved. The previously recommended tumor-free margins of 5 and 8 cm for intestinal and diffuse tumor growth types, respectively, are no longer accepted. The scientific evidence for definitive recommendations is low. A negative oral margin in the intraoperative frozen section is crucial. Perioperative chemotherapy with a platinum derivative, a fluoropyrimidine, and an anthracycline significantly prolonged overall survival in patients with resectable gastric cancer in the MAGIC trial. In the French FNCLCC/FFCD multicenter study, perioperative chemotherapy with a platinum derivative and a fluoropyrimidine without an anthracycline showed a comparable effect size on improving survival. Currently, neither chemotherapy regimen is the first choice. Treatment according to the FLOT regimen (5-fluorouracil/folinic acid/oxaliplatin/docetaxel) further improved progression-free survival (hazard ratio, HR 0.75) and overall survival (HR 0.77) in patients with stage ≥ cT2 and/or cN+ compared with therapy analogous to MAGIC. The relatively higher efficacy of FLOT was consistent across relevant subgroup analyses such as age, histology, and tumor location. The rate of perioperative complications was comparable. For patients with gastric cancer ≥ stage IB who underwent resection without prior chemotherapy (e.g., because the tumor stage was misjudged before surgery), adjuvant chemotherapy may be recommended.
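As a purely illustrative summary of the stage-adapted recommendations above, the following minimal sketch (Python, with hypothetical function and category names; metastatic disease is deliberately excluded and handled in the stage IV section below) maps the clinical categories discussed here to the management options named in the text. It is not a clinical decision tool.

```python
def early_stage_management(cT: str, cN_positive: bool) -> str:
    """Illustrative mapping of the stage-adapted recommendations above (not a clinical tool)."""
    if cT == "T1a" and not cN_positive:
        # mucosal carcinoma, very low risk of nodal metastases
        return "endoscopic resection per adapted Japanese criteria (limited surgery as alternative)"
    if cT == "T1b" and not cN_positive:
        # submucosal infiltration, 25-28% risk of nodal metastases
        return "radical surgical resection with systematic lymphadenectomy"
    # stage IB-III, i.e. >= cT2 and/or cN+
    return "radical resection with D2 lymphadenectomy plus perioperative FLOT"

print(early_stage_management("T1a", cN_positive=False))
print(early_stage_management("T2", cN_positive=True))
```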
In HER2-positive tumors, a benefit in terms of overall survival from adding a HER2 antibody to perioperative chemotherapy has not been proven, and this approach therefore cannot be recommended outside of clinical trials. The AIO-PETRARCA phase II study showed a higher histopathologic remission rate when FLOT chemotherapy was combined with trastuzumab and pertuzumab, and a trend in favor of better progression-free and overall survival. These data require validation in larger and independent cohorts. In localized gastric carcinoma with high microsatellite instability (MSI-H), the efficacy of perioperative chemotherapy has been discussed controversially on the basis of retrospective data analyses. However, more recent data from the DANTE trial show that complete and subtotal tumor remissions can be achieved with FLOT chemotherapy even in MSI-H subtype gastric carcinomas. Thus, according to the current status, perioperative chemotherapy with the FLOT regimen remains indicated for MSI-H gastric cancers if tumor response is pursued. The FFCD-NEONIPIGA phase II study showed a high histopathologic remission rate after 12 weeks of therapy with nivolumab plus ipilimumab without chemotherapy in resectable MSI-H cancers. These data require validation in larger and independent patient cohorts. After R1 resection, adjuvant radiochemotherapy may be considered.

Stage IV

The aim of therapy is usually non-curative. The first priority is systemic drug therapy, supplemented in individual cases by local therapeutic measures. Active symptom control and supportive measures such as nutritional counseling, psychosocial support, and palliative care are an integral part of treatment. The prognosis of patients with locally advanced and irresectable or metastatic (pooled here as "advanced") gastric cancer is unfavorable. Studies evaluating the benefit of chemotherapy have shown a median survival of less than 1 year. However, there is evidence that chemotherapy can prolong the survival of patients with advanced gastric cancer compared to best supportive therapy alone and maintain quality of life for longer.

Systemic tumor therapy

The currently recommended algorithms for drug therapy of patients with advanced gastric cancer are shown in Figs. , , and .

First-line chemotherapy, molecular targeted therapy, and immunotherapy

Chemotherapy

The standard of care for first-line chemotherapy of advanced gastric cancer is a platinum–fluoropyrimidine doublet. Oxaliplatin and cisplatin are comparably effective, with a more favorable side effect profile for oxaliplatin. This may contribute to a trend toward better efficacy, especially in patients > 65 years. Fluoropyrimidines can be administered as an infusion (5-FU) or orally (capecitabine or S-1). Oral fluoropyrimidines are comparably effective to infused 5-FU. Capecitabine is approved in combination with a platinum derivative and has been studied with both cisplatin and oxaliplatin in European patients. S-1 is established as a standard of care in Japan and approved in Europe for palliative first-line therapy in combination with cisplatin. Infused 5-FU should be preferred over oral medications in patients with dysphagia or other feeding problems. In elderly or frail patients, results of the phase III GO-2 trial support a dose-reduced application of oxaliplatin–fluoropyrimidine chemotherapy (to 80% or 60% of the standard dose from the beginning), resulting in fewer side effects with comparable efficacy.
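To make the GO-2 dose levels concrete, the short sketch below shows how a reduction to 80% or 60% of a standard body-surface-area-based dose would be computed. The reference dose and body surface area in the example are assumed values for illustration only and are not taken from the trial protocol.

```python
def reduced_dose(standard_dose_mg_per_m2: float, body_surface_area_m2: float, dose_level: float) -> float:
    """Absolute dose at a given GO-2 dose level (1.0 = 100%, 0.8 = 80%, 0.6 = 60%)."""
    assert dose_level in (1.0, 0.8, 0.6), "GO-2 studied the 100%, 80% and 60% dose levels"
    return standard_dose_mg_per_m2 * body_surface_area_m2 * dose_level

# Example with an assumed standard dose of 130 mg/m2 and a BSA of 1.7 m2, at the 60% level:
print(reduced_dose(130.0, 1.7, 0.6))  # 132.6 mg
```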
The addition of docetaxel to a platinum–fluoropyrimidine combination (three-weekly DCF regimen) improved radiographic response rates and prolonged overall survival in a historical phase III trial, but also resulted in significantly increased side effects. Other phase II trials examined modified docetaxel–platinum–fluoropyrimidine triplets and showed reduced toxicity compared with DCF in some cases. However, the higher response rate of a triplet (37% vs. 25%) does not translate into prolonged survival in recent trials, which included effective second-line regimens. In the phase III JCOG1013 trial, patients with advanced gastric cancer received either cisplatin plus S-1 or cisplatin plus S-1 and docetaxel. There were no differences in radiographic response, progression-free survival, or overall survival. Therefore, given the increased toxicity and uncertain impact on overall survival, no recommendation can be made for first-line docetaxel–platinum–fluoropyrimidine therapy, and a platinum–fluoropyrimidine doublet remains the standard approach. In individual cases, e.g., when fast tumor regression is urgently required, first-line therapy with a platinum–fluoropyrimidine–docetaxel triplet may be indicated. Irinotecan–5-FU has been compared with cisplatin–5-FU and with epirubicin–cisplatin–capecitabine in randomized phase III trials and showed comparable survival with controllable side effects. Irinotecan–5-FU can therefore be considered a treatment alternative to platinum–fluoropyrimidine doublets according to the scientific evidence; however, irinotecan has no formal approval in Europe for gastric cancer.

HER2-positive gastric cancer

HER2 positivity in gastric cancer is defined as protein expression with an immunohistochemistry score [IHC] of 3+ or IHC 2+ with concomitant gene amplification on in situ hybridization [ISH] (HER2/CEP17 ratio ≥ 2.0). HER2 diagnostics should be quality controlled. Trastuzumab should be added to chemotherapy in patients with HER2-positive advanced gastric cancer. The recommendation is based on data from the phase III ToGA trial, which showed a higher response rate and prolonged survival for trastuzumab–cisplatin–fluoropyrimidine chemotherapy vs. chemotherapy alone using the above selection criteria; the additional side effects of trastuzumab are minor and controllable. Combinations of trastuzumab with oxaliplatin plus a fluoropyrimidine show results comparable to the historical cisplatin-containing ToGA regimen. Based on the not yet fully reported results of the Keynote-811 study, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) published a positive opinion on 20 July 2023 for pembrolizumab plus trastuzumab and chemotherapy as first-line treatment for HER2-positive advanced gastric or gastroesophageal junction (GEJ) adenocarcinoma expressing PD-L1 (CPS ≥ 1) ( https://www.ema.europa.eu/en/medicines/human/summaries-opinion/keytruda-10 ). If available, this combination should be preferred over trastuzumab plus chemotherapy in the respective patient population (Fig. ).
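The HER2 definition above lends itself to a compact decision rule. The sketch below encodes it literally (IHC 3+, or IHC 2+ with ISH amplification at a HER2/CEP17 ratio ≥ 2.0); function and parameter names are illustrative, and the snippet is meant only to restate the criteria, not to replace quality-controlled pathology.

```python
from typing import Optional

def her2_positive(ihc_score: int, her2_cep17_ratio: Optional[float] = None) -> bool:
    """HER2 positivity as defined above: IHC 3+, or IHC 2+ with an ISH HER2/CEP17 ratio >= 2.0."""
    if ihc_score == 3:
        return True
    if ihc_score == 2:
        return her2_cep17_ratio is not None and her2_cep17_ratio >= 2.0
    return False  # IHC 0 or 1+ counts as HER2-negative

print(her2_positive(3))                        # True
print(her2_positive(2, her2_cep17_ratio=2.4))  # True
print(her2_positive(2, her2_cep17_ratio=1.5))  # False
```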
Immunotherapy

The phase III CheckMate 649 trial evaluated the addition of nivolumab to chemotherapy (capecitabine–oxaliplatin or 5-FU/folinic acid–oxaliplatin) in patients with previously untreated gastric, esophago-gastric junction, or esophageal adenocarcinoma. The study included patients regardless of tumor PD-L1 status; the dual primary endpoints were overall survival and progression-free survival. Approximately 60% of the study population had tumors with a PD-L1 CPS ≥ 5. Nivolumab plus chemotherapy yielded a significant improvement over chemotherapy alone in overall survival (14.4 vs. 11.1 months, HR 0.71 [98.4% CI 0.59–0.86]; p < 0.0001) and progression-free survival (7.7 vs. 6.0 months, HR 0.68 [98% CI 0.56–0.81]; p < 0.0001) in patients with a PD-L1 CPS ≥ 5. The overall survival benefit was enriched in patients with MSI-H tumors treated with nivolumab plus chemotherapy vs. chemotherapy (unstratified hazard ratio 0.38; 95% confidence interval 0.17–0.84). The Asian phase II/III ATTRACTION-04 trial also showed a significant improvement in progression-free survival with nivolumab and first-line chemotherapy, but no significant improvement in overall survival compared to first-line chemotherapy alone. The most likely reason for the lack of a survival benefit (> 17 months in both arms) is that many patients received post-progression therapies, including immunotherapy, after first-line therapy. The multinational randomized phase III Keynote-859 trial included 1589 patients with advanced incurable gastric cancer. Patients received either platinum–fluoropyrimidine plus pembrolizumab or the same chemotherapy plus placebo every 3 weeks. Overall survival was prolonged in the pembrolizumab group (HR 0.78 [95% CI 0.70–0.87], p < 0.0001). The effect was more pronounced in the subgroup with a PD-L1 CPS ≥ 10 (HR 0.64), whereas efficacy was lower for CPS < 10 (HR 0.86). The overall survival benefit was enriched in patients with MSI-H tumors treated with pembrolizumab plus chemotherapy vs. chemotherapy (hazard ratio 0.34; 95% confidence interval 0.176–0.663). The results thus complement the positive trial data from the phase III Keynote-590 study, which led to EU approval of pembrolizumab in combination with platinum–fluoropyrimidine chemotherapy for adenocarcinoma of the esophagus and esophago-gastric junction. Positive phase III trial data were also presented for two immune checkpoint (PD-1) inhibitors not currently approved in Europe. Sintilimab in combination with oxaliplatin and capecitabine improved overall survival in the phase III ORIENT-16 trial. In the phase III Rationale-305 study, tislelizumab prolonged overall survival in combination with platinum–fluoropyrimidine or platinum plus investigator's-choice chemotherapy in patients with a positive PD-L1 score. PD-L1 was evaluated according to a scoring system not yet established internationally (the so-called Tumor Area Proportion score, TAP). ORIENT-16 and Rationale-305 have not been fully published to date, but they support the overall assessment that PD-1 immune checkpoint inhibitors can improve the efficacy of chemotherapy (depending on PD-L1 expression).
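Because several different PD-L1 CPS cut-offs are quoted in the trials above, the small sketch below simply lists which of those reported subgroups a given CPS value would fall into. The thresholds are taken from the text; everything else (function name, output strings) is illustrative and does not constitute a treatment recommendation.

```python
from typing import List

def cps_subgroups(cps: float) -> List[str]:
    """Lists the PD-L1 CPS subgroups quoted above that a given CPS value falls into."""
    groups = []
    if cps >= 1:
        groups.append("CPS >= 1 (threshold in the CHMP opinion on pembrolizumab plus trastuzumab, Keynote-811)")
    if cps >= 5:
        groups.append("CPS >= 5 (CheckMate 649 analysis population for nivolumab plus chemotherapy)")
    if cps >= 10:
        groups.append("CPS >= 10 (Keynote-859 subgroup with the more pronounced effect, HR 0.64)")
    return groups or ["CPS < 1"]

print(cps_subgroups(7))  # falls into the CPS >= 1 and CPS >= 5 subgroups
```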
Claudin 18.2

Data from the multinational phase III Spotlight trial were recently published. They show that in patients with advanced irresectable gastric cancer and tumor claudin 18.2 expression in ≥ 75% of tumor cells, zolbetuximab, a chimeric monoclonal IgG1 antibody directed against claudin 18.2, prolongs overall survival in combination with FOLFOX chemotherapy (median 18.23 vs. 15.54 months, HR 0.750, p = 0.0053). The main side effects of zolbetuximab are nausea and vomiting, especially during the first administrations. The results of the phase III Spotlight trial are largely confirmed by the multinational phase III GLOW trial, in which a chemotherapy doublet was used as the control therapy and as the combination partner for zolbetuximab. It remains to be seen whether the European Medicines Agency will grant approval to zolbetuximab for patients with claudin 18.2-positive metastatic and previously untreated gastric cancer.

Second-line and third-line therapy: chemotherapy and anti-angiogenic therapy

Figures and show the algorithm for second- and third-line therapy for patients with advanced gastric cancer. The evidence-based chemotherapy options in this setting are paclitaxel, docetaxel, and irinotecan, which have comparable efficacy with different specific toxicities. Irinotecan may be preferred in patients with preexisting neuropathy; however, it has no EU approval. 5-FU/folinic acid plus irinotecan (FOLFIRI) is also occasionally used, but the scientific evidence for its use in second- and third-line treatment is limited. Ramucirumab plus paclitaxel is the recommended standard for second-line therapy and is approved in the EU. The addition of the anti-vascular endothelial growth factor receptor-2 (VEGFR-2) antibody ramucirumab to paclitaxel increases tumor response rates and prolongs progression-free and overall survival according to the results of the phase III RAINBOW trial. Already in the phase III REGARD trial, ramucirumab monotherapy had shown prolonged survival compared to placebo, albeit with a low radiological response rate.

Immunotherapy in second- and third-line therapy

In the phase III KEYNOTE-061 trial, pembrolizumab monotherapy did not show prolonged overall survival compared with chemotherapy. However, an exploratory subgroup analysis identified a clear benefit of anti-PD-1 immunotherapy in patients with MSI-H gastric cancer. Therefore, PD-1 inhibition is recommended in advanced MSI-H carcinomas at the latest in second-line treatment. Pembrolizumab has European approval for this indication based on the Keynote-061 and Keynote-158 trials. Of note, pembrolizumab in the second line for MSI-H advanced gastric cancer is not recommended when immunotherapy was already administered in first-line treatment. Other biomarkers, particularly EBV and tumor mutation burden, are also discussed as predictive factors for PD-1 immune checkpoint inhibitor efficacy. However, the evidence to date is insufficient to support a positive recommendation for immunotherapy based on the presence of these biomarkers.
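As a compact restatement of the second-line considerations above, the sketch below encodes the two rules stated in the text: ramucirumab plus paclitaxel as the recommended standard, and PD-1 inhibition for MSI-H tumors provided no immunotherapy was given in the first line. Function and parameter names are hypothetical, and the snippet is illustrative only; HER2-directed options are discussed in the next subsection.

```python
def second_line_option(msi_high: bool, prior_first_line_immunotherapy: bool) -> str:
    """Illustrative restatement of the second-line rules described above."""
    if msi_high and not prior_first_line_immunotherapy:
        # PD-1 inhibition recommended at the latest in second line for MSI-H tumors
        return "pembrolizumab (PD-1 inhibition)"
    # Recommended, EU-approved standard for second-line therapy
    return "ramucirumab plus paclitaxel"

print(second_line_option(msi_high=True, prior_first_line_immunotherapy=False))
print(second_line_option(msi_high=False, prior_first_line_immunotherapy=False))
```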
HER2-targeted therapy

Studies evaluating trastuzumab, lapatinib, and trastuzumab emtansine for second-line treatment in patients with HER2-positive carcinomas were negative. Therefore, these drugs should not be used in gastric cancer outside of clinical trials. A randomized phase II trial showed an improvement in tumor response rate and overall survival for the antibody–drug conjugate trastuzumab deruxtecan (T-DXd) compared with standard chemotherapy in patients with pretreated HER2-positive advanced gastric cancer. Destiny-GC-04 is an ongoing study assessing the efficacy and safety of T-DXd compared with ramucirumab and paclitaxel in participants with HER2-positive (defined as immunohistochemistry [IHC] 3+ or IHC 2+/in situ hybridization [ISH]-positive) gastric or esophago-gastric junction adenocarcinoma who have progressed on or after a trastuzumab-containing regimen and have not received any additional systemic therapy ( https://classic.clinicaltrials.gov/ct2/show/NCT04704934 ). Prerequisites for inclusion in the Destiny-GC-01 study were at least two prior lines of therapy, prior treatment with a platinum derivative, a fluoropyrimidine, and trastuzumab, and previously confirmed HER2 positivity. The study recruited patients exclusively in East Asia. The results of Destiny-GC-01 were largely confirmed in the single-arm phase II Destiny-GC-02 trial, which included non-Asian patients in second-line therapy. Platinum–fluoropyrimidine–trastuzumab pretreatment and confirmed HER2 positivity of the tumor in a recent re-biopsy before initiating T-DXd therapy were mandatory. The EU approval covers the following indication for T-DXd: monotherapy for the treatment of adult patients with advanced HER2-positive adenocarcinoma of the stomach or esophago-gastric junction who have received a prior trastuzumab-based regimen. We recommend re-checking HER2 status according to the classically established HER2 diagnostic criteria prior to therapy with T-DXd, especially if use in second-line therapy is planned, where a valid alternative with paclitaxel–ramucirumab is available. This recommendation is based on the inclusion criteria of the Destiny-GC-02 trial and the knowledge that loss of HER2 status occurs in approximately 30% of gastric cancers after first-line therapy with trastuzumab. There is initial evidence of efficacy of T-DXd in tumors with low HER2 expression. However, the data are not yet sufficient to recommend its use.

Third-line therapy

For the treatment of patients with advanced gastric cancer in the third line and beyond, the best evidence is available for trifluridine–tipiracil (FTD/TPI), based on the phase III TAGS trial. Median overall survival with FTD/TPI vs. placebo was significantly improved in the overall patient cohort, in the third-line cohort, and in the fourth-line cohort. Therefore, if oral therapy is feasible, trifluridine–tipiracil (FTD/TPI) should be used; alternatively, if intravenous therapy is preferred, irinotecan or a taxane can be given, if not already used in a previous line of therapy. As shown above, T-DXd is a very effective third-line therapy for HER2-positive carcinoma after trastuzumab pretreatment. Nivolumab also proved to be effective; however, the data from the ATTRACTION-02 trial were obtained exclusively in Asian patients, so nivolumab does not have EMA approval for third-line treatment of advanced gastric cancer and therefore cannot be recommended. Following the recommendation of a molecular tumor board, an unapproved therapeutic option may also be preferred in justified cases, especially if the recommendation can be based on ESMO Scale for Clinical Actionability of Molecular Targets (ESCAT) level I or II evidence.
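The third-line sequence described above can be summarized as a short rule set. The sketch below follows the text (FTD/TPI if oral therapy is feasible; otherwise irinotecan or a taxane if not yet used; T-DXd for HER2-positive disease after trastuzumab pretreatment) and uses hypothetical parameter names; it is illustrative only.

```python
from typing import Set

def third_line_option(her2_positive_after_trastuzumab: bool,
                      oral_therapy_feasible: bool,
                      previously_used: Set[str]) -> str:
    """Illustrative restatement of the third-line considerations above."""
    if her2_positive_after_trastuzumab:
        return "trastuzumab deruxtecan (T-DXd)"
    if oral_therapy_feasible:
        return "trifluridine-tipiracil (FTD/TPI)"
    # intravenous alternatives, only if not already used in an earlier line
    for candidate in ("irinotecan", "taxane"):
        if candidate not in previously_used:
            return candidate
    return "consider a molecular tumor board recommendation or a clinical trial"

print(third_line_option(False, oral_therapy_feasible=False, previously_used={"taxane"}))  # irinotecan
```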
Surgery for metastatic gastric cancer

The randomized phase III REGATTA trial showed that gastrectomy in addition to chemotherapy for metastatic disease did not confer a survival benefit compared with chemotherapy alone. International data analyses show that surgical therapy for oligometastatic disease is increasingly perceived as a treatment option. The AIO-FLOT3 phase II trial reported results on the feasibility of resection for stage IV gastric cancer and on survival in highly selected patients with oligometastatic disease that had not shown primary progression on FLOT chemotherapy. The potential prognostic benefit of resection for oligometastatic gastric cancer is currently being evaluated in randomized phase III trials [RENAISSANCE (NCT0257836) and SURGIGAST (NCT03042169)]. In a Delphi procedure, a definition of oligometastasis was established by a European expert group (OMEC). According to this definition, oligometastatic disease comprises the following phenotypes: 1–2 metastases in either liver, lung, retroperitoneal lymph nodes, adrenal glands, soft tissue, or bone.

Supportive therapy and nutrition

It is recommended that nutritional and symptom screening with appropriate tools be performed regularly in all patients with advanced gastric cancer, and that appropriate supportive therapies be derived from the results. A study from China showed that early integration of supportive-palliative care is effective and suggests a survival benefit in patients with advanced gastric cancer. Weight loss is a multifactorial phenomenon and may be due to digestive tract obstruction, malabsorption, or hypermetabolism. Clinical data sets show that weight loss of ≥ 10% before chemotherapy or ≥ 3% during the first cycle of chemotherapy is associated with poorer survival. A change in body composition with impaired muscular capacity has also been shown to be prognostically unfavorable in patients with advanced gastric cancer. The modified Glasgow Prognostic Score (serum CRP and albumin) can be used to assess the extent of sarcopenia and the prognosis of patients with advanced gastric cancer. From this, it can be concluded that screening for nutritional status should be performed in all patients with advanced gastric cancer (for example, using the Nutritional Risk Screening, NRS) and that expert nutritional counseling and co-supervision should be offered if a nutritional deficiency is evident. Dysphagia in proximal gastric cancer can be improved with radiotherapy or stent insertion. Single-dose brachytherapy is the preferred option at some centers and results in longer-lasting symptom control and fewer complications than stent insertion. Stenting is needed for severe dysphagia, especially in patients with limited life expectancy, as the effects of the stent are immediate, whereas radiotherapy improves dysphagic symptoms only after approximately 4–6 weeks. If radiotherapy or a stent is not an option, enteral nutrition via nasogastric, nasojejunal, or percutaneously placed feeding tubes may provide relief. The indication for parenteral nutrition follows generally accepted guidelines.
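As a small worked example of the weight-loss thresholds quoted above (≥ 10% before chemotherapy or ≥ 3% during the first cycle), the sketch below computes relative weight loss and flags these cut-offs. The weights in the example and all names are illustrative; the snippet is not a validated screening instrument.

```python
def relative_weight_loss(baseline_kg: float, current_kg: float) -> float:
    """Relative weight loss as a percentage of the baseline weight."""
    return (baseline_kg - current_kg) / baseline_kg * 100.0

def prognostically_relevant(loss_before_chemo_pct: float, loss_first_cycle_pct: float) -> bool:
    """Flags the thresholds quoted above (>= 10% before chemotherapy, >= 3% during cycle 1)."""
    return loss_before_chemo_pct >= 10.0 or loss_first_cycle_pct >= 3.0

# Example: 78 kg six months ago, 69 kg at the start of chemotherapy, 67 kg after cycle 1
before = relative_weight_loss(78.0, 69.0)  # about 11.5%
during = relative_weight_loss(69.0, 67.0)  # about 2.9%
print(round(before, 1), round(during, 1), prognostically_relevant(before, during))  # flagged
```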
Since the probability of lymph node metastasis in mucosal gastric cancer (T1a) is very low, endoscopic resection (ER) may be sufficient . If histopathologic workup after endoscopic resection reveals that tumor infiltration extends into the submucosa (T1b), surgical resection with systematic lymphadenectomy should be performed, as lymph node metastases may already be present in up to 30% of cases. Gastric cancers classified as pT1a cN0 cM0 should be treated with endoscopic resection, considering the adapted Japanese criteria . A (limited) surgical approach is an alternative. Perioperative or adjuvant chemotherapy is not indicated for stage IA (T1a) patients.
For stage IA gastric cancer with infiltration of the submucosa, the risk of lymph node metastases is 25–28%. The 5-year survival rate is 70.8% for all stage IA in the SEER database , and the cancer-specific survival rate at 10 years is 93% in the Italian IRGGC analysis. Therapy of choice in stage I (T1b category) is radical surgical resection (subtotal, total, or transhiatal extended gastrectomy). Limited resection can be recommended only in exceptional cases due to the imprecise accuracy of pre-therapeutic staging. A benefit from perioperative or adjuvant chemotherapy has not been established for stage IA (T1b) patients.
In stage IB—III, resection should consist of radical resection (subtotal, total, or transhiatal extended gastrectomy) in combination with D2- lymphadenectomy. Subtotal gastrectomy can be performed if safe free tumor margins can be achieved. The previously recommended tumor-free margins of 5 and 8 cm for intestinal and diffuse tumor growth types, respectively, are no longer accepted. The scientific evidence for definitive recommendations is low. A negative oral margin in the intraoperative frozen section is crucial. Perioperative chemotherapy with a platinum derivative, a fluoropyrimidine, and an anthracycline significantly prolonged overall survival in patients with resectable gastric cancer in the MAGIC trial . In the French FNCLCC/FFCD multicenter study, perioperative chemotherapy with a platinum derivative and a fluoropyrimidine without anthracycline showed a comparable effect size on improving survival . Currently, neither chemotherapy regimen is the first choice. Treatment according to the FLOT regimen (5-fluorouracil/folinic acid/oxaliplatin/docetaxel) further improved progression-free survival (hazard ratio, HR 0.75) and overall survival (HR 0.77) in patients with stage ≥ cT2 and/or cN + compared with therapy analogous to MAGIC. The relatively higher efficacy of FLOT was shown to be consistent across relevant subgroup analyses such as age, histology, and tumor location. The rate of perioperative complications was comparable . For patients with gastric cancer ≥ stage IB who received resection without prior chemotherapy (e.g., due to misdiagnosed tumor stage prior to surgery), adjuvant chemotherapy may be recommended. In HER2-positive tumors, a benefit from combining perioperative chemotherapy with a HER2 antibody in the perioperative setting in terms of overall survival has not been proven, and therefore cannot be recommended outside of clinical trials. The AIO-PETRARCA phase 2 study showed a higher histopathologic remission rate when FLOT chemotherapy was combined with trastuzumab + pertuzumab and a trend in favor of better progression-free and overall survival . These data require validation in larger and independent cohorts. In microsatellite instability (MSI-H) localized gastric carcinoma, the efficacy of perioperative chemotherapy, based on retrospective data analyses , has been controversially discussed. However, more recent data from the DANTE trial show that complete and subtotal tumor remissions can be achieved with FLOT chemotherapy even in MSI-H subtype gastric carcinomas . Thus, according to the current status, perioperative chemotherapy with the FLOT regimen remains indicated for MSI-H gastric cancers if tumor response is pursued. The FFCD-NEONIPIGA phase 2 study showed a high histopathologic remission rate after 12 weeks of therapy with nivolumab + ipilimumab without chemotherapy in resectable MSI-H cancers . Data require validation in larger and independent patient cohorts. After R1 resection, adjuvant radiochemotherapy may be considered.
The aim of therapy is usually non-curative. The first priority is systemic drug therapy, supplemented in individual cases by local therapeutic measures. Active symptom control and supportive measures such as nutritional counseling, psychosocial support, and palliative care are an integral part of treatment. The prognosis of patients with locally advanced and irresectable or metastatic (pooled here as "advanced") gastric cancer is unfavorable. Studies evaluating the benefit from chemotherapy have shown a median survival of less than 1 year . However, there is evidence that chemotherapy can prolong the survival of patients with advanced gastric cancer compared to best supportive therapy alone and maintain quality of life longer . Systemic tumor therapy The current recommended algorithms for drug therapy of patients with advanced gastric cancer are shown in Figs. , , and . First-line chemotherapy, molecular targeted therapy, and immunotherapy Chemotherapy The standard of care for first-line chemotherapy of advanced gastric cancer is a platinum–fluoropyrimidine doublet. Oxaliplatin and cisplatin are comparably effective, with a more favorable side effect profile for oxaliplatin. This may contribute to a trend toward better efficacy, especially in patients > 65 years . Fluoropyrimidines can be administered as infusion (5-FU) or orally (capecitabine or S-1). Oral fluoropyrimidines are comparably effective to infused 5-FU . Capecitabine is approved in combination with a platinum derivative and has been studied with both cis- and oxaliplatin in European patients. S-1 is established as a standard of care in Japan and approved in Europe for palliative first-line therapy in combination with cisplatin. Infused 5-FU should be preferred over oral medications in patients with dysphagia or other feeding problems. In elderly or frail patients, results of the phase III GO-2 trial support a dose-reduced application of oxaliplatin–fluoropyrimidine chemotherapy (to 80 or 60% of the standard dose from the beginning), resulting in fewer side effects with comparable efficacy . The addition of docetaxel to a platinum–fluoropyrimidine combination (three-weekly DCF regimen) improved radiographic response rates and prolonged overall survival in a historical phase III trial, but also resulted in significantly increased side effects . Other phase II trials examined modified docetaxel–platinum–fluoropyrimidine triplets and showed reduced toxicity compared with DCF in some cases . However, the higher response rate of a triplet (37% vs. 25% does not translate into prolonged survival in recent trials, which included effective second-line regimens. In the phase III JCOG1013 trial, patients with advanced gastric cancer received either cisplatin plus S-1 or cisplatin plus S-1 and docetaxel. There were no differences in radiographic response, progression-free survival, or overall survival . Therefore, with increased toxicity and uncertain impact on overall survival, no recommendation can be made for first-line docetaxel–platinum–fluoropyrimidine therapy, so that a platinum–fluoropyrimidine doublet remains the standard approach. In individual cases, e.g., when fast tumor regression is urgently required, first-line therapy with a platinum–fluoropyrimidine–docetaxel triplet may be indicated. Irinotecan-5-FU has been compared with cisplatin-5-FU and with epirubicin–cisplatin–capecitabine in randomized phase III trials and showed comparable survival with controllable side effects . 
Irinotecan-5-FU can, therefore, be considered a treatment alternative to platinum–fluoropyrimidine doublets according to scientific evidence; however, irinotecan has no formal approval in Europe for gastric cancer. HER2-positive gastric cancer HER2 positivity is defined in gastric cancer as the presence of protein expression with immunohistochemistry score [IHC] of 3 + or IHC 2 + and concomitant gene amplification on in situ hybridization [ISH], HER2/CEP17 ratio ≥ 2.0. HER2 diagnosis should be quality controlled . Trastuzumab should be added to chemotherapy in patients with HER2-positive advanced gastric cancer . The recommendation is based on data from the phase III ToGA trial, showing a higher response rate and prolonged survival for trastuzumab–cisplatin–fluoropyrimidine chemotherapy vs. chemotherapy alone using the above selection criteria; the additional trastuzumab side effects are minor and controllable . Combinations of trastuzumab and oxaliplatin plus fluoropyrimidine show comparable results to the historical cisplatin-containing ToGA regimen . Based on data from the not yet fully reported results of the Keynote-811 study, the Commission for Human Medical Products (CHMP) of the European Medicines Agency (EMA) published a positive opinion for pembrolizumab plus trastuzumab and chemotherapy as first-line treatment for HER2-positive advanced gastric or gastroesophageal junction (GEJ) adenocarcinoma expressing PD-L1 (CPS ≥ 1) on 20th of July 2023 ( https://www.ema.europa.eu/en/medicines/human/summaries-opinion/keytruda-10 ). If available, this combination should be preferred over trastuzumab plus chemotherapy in the respective patient population (Fig. ). Immunotherapy The phase III CheckMate 649 trial evaluated the addition of nivolumab to chemotherapy (capecitabine-oxaliplatin or 5-FU/folinic acid-oxaliplatin) in patients with previously untreated gastric, esophago-gastric junction, or esophageal adenocarcinoma . The study included patients regardless of tumor PD-L1 status; the dual primary endpoints were overall survival and progression-free survival. Approximately 60% of the study population had tumors with a PD-L1 CPS ≥ 5. Nivolumab plus chemotherapy yielded a significant improvement over chemotherapy alone in overall survival (14.4 vs. 11.1 months, HR 0.71 [98.4% CI 0.59–0.86]; p < 0.0001) and progression-free survival (7.7 vs. 6.0 months, HR 0.68 [98% CI 0.56–0.81]; p < 0.0001) in patients with a PD-L1 CPS ≥ 5. Overall survival benefit was enriched in patients with MSI-H tumors with nivolumab plus chemotherapy vs. chemotherapy (unstratified hazard ratio 0.38; 95% confidence interval 0.17, 0.84). The Asian phase II/III ATTRACTION-04 trial also showed a significant improvement in progression-free survival with nivolumab and first-line chemotherapy, but with no significant improvement in overall survival compared to first-line chemotherapy alone. The most likely reason for the lack of survival benefit (> 17 months in both arms) is that many patients received post-progression therapies including immunotherapy after first-line therapy . The multinational randomized phase III Keynote-859 trial included 1589 patients with advanced incurable gastric cancer. Patients received either platinum–fluoropyrimidine plus pembrolizumab or the same chemotherapy plus placebo every 3 weeks. Overall survival was prolonged in the pembrolizumab group (HR 0.78 [95% CI 0.70–0.87], p < 0.0001). 
The effect was more pronounced in the subgroup with a PD-L1 CPS ≥ 10 (HR 0.64), whereas efficacy was lower for CPS < 10 (HR 0.86). Overall survival benefit was enriched in patients with MSI-H tumors with pembrolizumab plus chemotherapy vs. chemotherapy (hazard ratio 0.34; 95% confidence interval 0.176, 0.663) . The results, thus, complement the positive trial data from the phase III Keynote-590 study, which led to EU approval of pembrolizumab in combination with platinum–fluoropyrimidine chemotherapy for adenocarcinoma of the esophagus and esophago-gastric junction . Positive phase III trial data were also presented on two immune checkpoint (PD-1) inhibitors not currently approved in Europe. Sintilimab in combination with oxaliplatin and capecitabine improved overall survival in the phase III ORIENT-16 trial . In the phase III Rationale-305 study, tislelizumab prolonged overall survival in combination with platinum–fluoropyrimidine or platinum-investigator-choice chemotherapy in patients with a positive PD-L1 score. PD-L1 was evaluated according to a scoring system not yet established internationally (the so-called Tumor Area Proportion score, TAP) . ORIENT-16 and Rationale-305 have not been fully published to date, but support the overall assessment that PD-1 immune checkpoint inhibitors can improve the efficacy of chemotherapy (depending on PD-L1 expression). Claudin 18.2 Data from the multinational phase III Spotlight trial were recently published. These show that in patients with advanced irresectable gastric cancer and tumor claudin 18.2 expression in ≥ 75% of tumor cells, zolbetuximab, a chimeric monoclonal IgG1 antibody directed against claudin 18.2, in combination with FOLFOX chemotherapy prolongs overall survival (median 18.23 vs. 15.54 months, HR 0.750, p = 0.0053). The main side effects of zolbetuximab are nausea and vomiting, especially during the first applications . The results of the phase III Spotlight trial are largely confirmed by the multinational phase III GLOW trial, in which the chemotherapy doublet was used as a control therapy or combination partner for zolbetuximab . It remains to be seen whether the European Medicines Agency will grant approval to zolbetuximab in patients with claudin 18.2-positive metastatic and previously untreated gastric cancer. Second-line and third-line therapy chemotherapy and anti-angiogenic therapy Figures and show the algorithm for second- and third-line therapy for patients with advanced gastric cancer. The evidence-based chemotherapy options in this setting are paclitaxel, docetaxel, and irinotecan, which have comparable efficacy with different specific toxicities . Irinotecan may be preferred in patients with preexisting neuropathy; however, there is no EU approval. 5-FU/folinic acid plus irinotecan (FOLFIRI) is also occasionally used, but the scientific evidence for its use in second- and third-line treatment is limited . Ramucirumab plus paclitaxel is the recommended standard for second-line therapy and is approved in the EU. The addition of the anti-vascular endothelial growth factor receptor-2 (VEGFR-2) antibody ramucirumab to paclitaxel increases tumor response rates and prolongs progression-free and overall survival according to the results of the phase III RAINBOW trial . Already in the phase III REGARD trial, ramucirumab monotherapy showed prolonged survival compared to placebo, albeit with a low radiological response rate . 
Immunotherapy in second- and third-line therapy In the phase III KEYNOTE-061 trial, pembrolizumab monotherapy did not show prolonged overall survival compared with chemotherapy . However, an exploratory subgroup analysis recognized a clear benefit for anti-PD-1 immunotherapy in patients with MSI-H gastric cancer . Therefore, PD-1 inhibition is recommended in advanced MSI-H carcinomas at the latest in second-line treatment. Pembrolizumab has European approval for this indication based on the Keynote-061 and Keynote-158 trials . Of note, pembrolizumab in second line for MSI-High advanced gastric cancer is not recommended when immunotherapy was administered in first-line treatment. Other biomarkers, particularly EBV and tumor mutation burden, are also discussed as predictive factors for PD-1 immune checkpoint inhibitor efficacy . However, the evidence to date is insufficient to support a positive recommendation for immunotherapy based upon the presence of these biomarkers. HER2-targeted therapy Studies evaluating trastuzumab, lapatinib, and trastuzumab emtansine for second-line treatment in patients with HER2-positive carcinomas were negative . Therefore, these drugs should not be used in gastric cancer outside of clinical trials. A randomized phase II trial showed an improvement in tumor response rate and overall survival for the antibody–drug conjugate trastuzumab deruxtecan (T-DXd) compared with standard chemotherapy in patients with pretreated HER2-positive advanced gastric cancer . Destiny-GC-04 is an ongoing study, assessing the efficacy and safety of T-DXd compared with ramucirumab and paclitaxel in participants with HER2-positive (defined as immunohistochemistry [IHC] 3 + or IHC 2 + /in situ hybridization [ISH] +) gastric or esophago-gastric junction adenocarcinoma who have progressed on or after a trastuzumab-containing regimen and have not received any additional systemic therapy ( https://classic.clinicaltrials.gov/ct2/show/NCT04704934 ). Prerequisites for inclusion in the Destiny-GC-01 study were at least two prior lines of therapy, prior treatment with a platinum derivative, a fluoropyrimidine, and trastuzumab, and previously confirmed HER2 positivity. The study was recruited exclusively in East Asia. The results of Destiny-GC-01 were largely confirmed in the single-arm phase II Destiny-GC-02 trial, which included non-Asian patients in second-line therapy. Mandatory was platinum–fluoropyrimidine–trastuzumab pretreatment and confirmed HER2 positivity of the tumor in a recent re-biopsy before initiating T-DXd therapy . The EU approval includes the following indication of T-DXd: monotherapy for the treatment of adult patients with advanced HER2-positive adenocarcinoma of the stomach or esophago-gastric junction who have received a prior trastuzumab-based regimen. We recommend, according to the classically established HER2 diagnostic criteria, to check the HER2 status prior to therapy with T-DXd, especially if use in second-line therapy is planned, where a valid alternative with paclitaxel–ramucirumab is available. This recommendation is based on the inclusion criteria of the Destiny-GC-02 trial and the knowledge that loss of HER2 status occurs in approximately 30% of gastric cancers after first-line therapy with trastuzumab . There is initial evidence of efficacy of T-DXd in low HER2 expression . However, data are not yet sufficient to recommend its use. 
Third-line therapy For the treatment of patients with advanced gastric cancer in the third line and beyond, the best evidence is available for trifluridine–tipiracil (FTD/TPI) based on the phase III TAGS trial. Median overall survival with FTD/TPI vs. placebo was significantly improved in the overall patient cohort, in the third-line cohort, and in the fourth-line cohort . Therefore, if oral therapy is feasible, trifluridine–tipiracil (FTD/TPI) should be used; alternatively, if intravenous therapy is preferred, irinotecan or a taxane can be given, if not already used in a previous line of therapy. As shown above, T-DXd is a very effective third-line therapy for HER2-positive carcinoma after trastuzumab pretreatment. Nivolumab also proved to be effective; however, the data from the ATTRACTION-02 trial were obtained exclusively in Asian patients , so that nivolumab in the third line of treatment in patients with advanced gastric cancer does not have EMA approval, and therefore cannot be recommended. Following the recommendation of a molecular tumor board, an unapproved therapeutic option may also be preferred in justified cases, especially if the recommendation can be based on an ESMO Scale for Clinical Actionability of Molecular Targets (ESCAT) level I or II . Surgery for metastatic gastric cancer The randomized phase III REGATTA trial showed that gastrectomy in addition to chemotherapy for metastatic disease did not confer a survival benefit compared with chemotherapy alone . International data analyses show that surgical therapy for oligometastasic disease is increasingly perceived as a treatment option . The AIO-FLOT3 phase II trial reported results on the feasibility of resection for stage IV gastric cancer and survival in highly selected patients with oligometastatic disease that was without primary progression on FLOT chemotherapy . The potential prognostic benefit of resections for oligometastatic gastric cancer is currently being evaluated in randomized phase III trials [RENAISSANCE (NCT0257836) and SURGIGAST (NCT03042169)]. In a Delphi procedure, a definition for oligometastasis was determined in a European expert group (OMEC). According to this definition, oligometastasis can be defined as the following phenotypes: 1–2 metastases in either liver, lung, retroperitoneal lymph nodes, adrenal glands, soft tissue or bone . Supportive therapy and nutrition It is recommended that nutritional and symptom screening with appropriate tools be performed regularly in all patients with advanced gastric cancer, and appropriate supportive therapies be derived. A study from China showed that early integration of supportive-palliative care is effective and suggests a survival benefit in patients with advanced gastric cancer . Weight loss is a multifactorial phenomenon and may be due to digestive tract obstruction, malabsorption, or hypermetabolism. Clinical data sets show that weight loss of ≥ 10% before chemotherapy or ≥ 3% during the first cycle of chemotherapy is associated with poorer survival . Also, a change in body composition with impaired muscular capacity was shown to be prognostically unfavorable in patients with advanced gastric cancer . The modified Glasgow Prognostic Score (serum CRP and albumin) can be used to assess the extent of sarcopenia and the prognosis of patients with advanced gastric cancer . 
From this, it can be concluded that screening for nutritional status should be performed in all patients with advanced gastric cancer (for example, using Nutritional Risk Screening, NRS) and expert nutritional counseling and co-supervision should be offered, if nutritional deficiency is evident. Dysphagia in proximal gastric cancer can be improved with radiotherapy or stent insertion . Single-dose brachytherapy is the preferred option at some centers and results in longer-lasting symptom control and fewer complications than stent insertion. Stenting is needed for severe dysphagia and especially in patients with limited life expectancy, as the effects of the stent are immediate, whereas radiotherapy improves dysphagic symptoms only after approximately 4–6 weeks . If radiotherapy or a stent are not an option, enteral nutrition via naso-gastric, naso-jejunal, or percutaneously placed feeding tubes may provide relief . The indication for parenteral nutrition follows generally accepted guidelines.
The current recommended algorithms for drug therapy of patients with advanced gastric cancer are shown in Figs. , , and .
The standard of care for first-line chemotherapy of advanced gastric cancer is a platinum–fluoropyrimidine doublet. Oxaliplatin and cisplatin are comparably effective, with a more favorable side effect profile for oxaliplatin. This may contribute to a trend toward better efficacy, especially in patients > 65 years . Fluoropyrimidines can be administered as infusion (5-FU) or orally (capecitabine or S-1). Oral fluoropyrimidines are comparably effective to infused 5-FU . Capecitabine is approved in combination with a platinum derivative and has been studied with both cis- and oxaliplatin in European patients. S-1 is established as a standard of care in Japan and approved in Europe for palliative first-line therapy in combination with cisplatin. Infused 5-FU should be preferred over oral medications in patients with dysphagia or other feeding problems. In elderly or frail patients, results of the phase III GO-2 trial support a dose-reduced application of oxaliplatin–fluoropyrimidine chemotherapy (to 80 or 60% of the standard dose from the beginning), resulting in fewer side effects with comparable efficacy . The addition of docetaxel to a platinum–fluoropyrimidine combination (three-weekly DCF regimen) improved radiographic response rates and prolonged overall survival in a historical phase III trial, but also resulted in significantly increased side effects . Other phase II trials examined modified docetaxel–platinum–fluoropyrimidine triplets and showed reduced toxicity compared with DCF in some cases . However, the higher response rate of a triplet (37% vs. 25% does not translate into prolonged survival in recent trials, which included effective second-line regimens. In the phase III JCOG1013 trial, patients with advanced gastric cancer received either cisplatin plus S-1 or cisplatin plus S-1 and docetaxel. There were no differences in radiographic response, progression-free survival, or overall survival . Therefore, with increased toxicity and uncertain impact on overall survival, no recommendation can be made for first-line docetaxel–platinum–fluoropyrimidine therapy, so that a platinum–fluoropyrimidine doublet remains the standard approach. In individual cases, e.g., when fast tumor regression is urgently required, first-line therapy with a platinum–fluoropyrimidine–docetaxel triplet may be indicated. Irinotecan-5-FU has been compared with cisplatin-5-FU and with epirubicin–cisplatin–capecitabine in randomized phase III trials and showed comparable survival with controllable side effects . Irinotecan-5-FU can, therefore, be considered a treatment alternative to platinum–fluoropyrimidine doublets according to scientific evidence; however, irinotecan has no formal approval in Europe for gastric cancer.
HER2 positivity is defined in gastric cancer as the presence of protein expression with immunohistochemistry score [IHC] of 3 + or IHC 2 + and concomitant gene amplification on in situ hybridization [ISH], HER2/CEP17 ratio ≥ 2.0. HER2 diagnosis should be quality controlled . Trastuzumab should be added to chemotherapy in patients with HER2-positive advanced gastric cancer . The recommendation is based on data from the phase III ToGA trial, showing a higher response rate and prolonged survival for trastuzumab–cisplatin–fluoropyrimidine chemotherapy vs. chemotherapy alone using the above selection criteria; the additional trastuzumab side effects are minor and controllable . Combinations of trastuzumab and oxaliplatin plus fluoropyrimidine show comparable results to the historical cisplatin-containing ToGA regimen . Based on data from the not yet fully reported results of the Keynote-811 study, the Commission for Human Medical Products (CHMP) of the European Medicines Agency (EMA) published a positive opinion for pembrolizumab plus trastuzumab and chemotherapy as first-line treatment for HER2-positive advanced gastric or gastroesophageal junction (GEJ) adenocarcinoma expressing PD-L1 (CPS ≥ 1) on 20th of July 2023 ( https://www.ema.europa.eu/en/medicines/human/summaries-opinion/keytruda-10 ). If available, this combination should be preferred over trastuzumab plus chemotherapy in the respective patient population (Fig. ).
The phase III CheckMate 649 trial evaluated the addition of nivolumab to chemotherapy (capecitabine-oxaliplatin or 5-FU/folinic acid-oxaliplatin) in patients with previously untreated gastric, esophago-gastric junction, or esophageal adenocarcinoma . The study included patients regardless of tumor PD-L1 status; the dual primary endpoints were overall survival and progression-free survival. Approximately 60% of the study population had tumors with a PD-L1 CPS ≥ 5. Nivolumab plus chemotherapy yielded a significant improvement over chemotherapy alone in overall survival (14.4 vs. 11.1 months, HR 0.71 [98.4% CI 0.59–0.86]; p < 0.0001) and progression-free survival (7.7 vs. 6.0 months, HR 0.68 [98% CI 0.56–0.81]; p < 0.0001) in patients with a PD-L1 CPS ≥ 5. Overall survival benefit was enriched in patients with MSI-H tumors with nivolumab plus chemotherapy vs. chemotherapy (unstratified hazard ratio 0.38; 95% confidence interval 0.17, 0.84). The Asian phase II/III ATTRACTION-04 trial also showed a significant improvement in progression-free survival with nivolumab and first-line chemotherapy, but with no significant improvement in overall survival compared to first-line chemotherapy alone. The most likely reason for the lack of survival benefit (> 17 months in both arms) is that many patients received post-progression therapies including immunotherapy after first-line therapy . The multinational randomized phase III Keynote-859 trial included 1589 patients with advanced incurable gastric cancer. Patients received either platinum–fluoropyrimidine plus pembrolizumab or the same chemotherapy plus placebo every 3 weeks. Overall survival was prolonged in the pembrolizumab group (HR 0.78 [95% CI 0.70–0.87], p < 0.0001). The effect was more pronounced in the subgroup with a PD-L1 CPS ≥ 10 (HR 0.64), whereas efficacy was lower for CPS < 10 (HR 0.86). Overall survival benefit was enriched in patients with MSI-H tumors with pembrolizumab plus chemotherapy vs. chemotherapy (hazard ratio 0.34; 95% confidence interval 0.176, 0.663) . The results, thus, complement the positive trial data from the phase III Keynote-590 study, which led to EU approval of pembrolizumab in combination with platinum–fluoropyrimidine chemotherapy for adenocarcinoma of the esophagus and esophago-gastric junction . Positive phase III trial data were also presented on two immune checkpoint (PD-1) inhibitors not currently approved in Europe. Sintilimab in combination with oxaliplatin and capecitabine improved overall survival in the phase III ORIENT-16 trial . In the phase III Rationale-305 study, tislelizumab prolonged overall survival in combination with platinum–fluoropyrimidine or platinum-investigator-choice chemotherapy in patients with a positive PD-L1 score. PD-L1 was evaluated according to a scoring system not yet established internationally (the so-called Tumor Area Proportion score, TAP) . ORIENT-16 and Rationale-305 have not been fully published to date, but support the overall assessment that PD-1 immune checkpoint inhibitors can improve the efficacy of chemotherapy (depending on PD-L1 expression).
Data from the multinational phase III Spotlight trial were recently published. These show that in patients with advanced irresectable gastric cancer and tumor claudin 18.2 expression in ≥ 75% of tumor cells, zolbetuximab, a chimeric monoclonal IgG1 antibody directed against claudin 18.2, in combination with FOLFOX chemotherapy prolongs overall survival (median 18.23 vs. 15.54 months, HR 0.750, p = 0.0053). The main side effects of zolbetuximab are nausea and vomiting, especially during the first applications . The results of the phase III Spotlight trial are largely confirmed by the multinational phase III GLOW trial, in which the chemotherapy doublet was used as a control therapy or combination partner for zolbetuximab . It remains to be seen whether the European Medicines Agency will grant approval to zolbetuximab in patients with claudin 18.2-positive metastatic and previously untreated gastric cancer.
Figures and show the algorithm for second- and third-line therapy for patients with advanced gastric cancer. The evidence-based chemotherapy options in this setting are paclitaxel, docetaxel, and irinotecan, which have comparable efficacy with different specific toxicities . Irinotecan may be preferred in patients with preexisting neuropathy; however, there is no EU approval. 5-FU/folinic acid plus irinotecan (FOLFIRI) is also occasionally used, but the scientific evidence for its use in second- and third-line treatment is limited . Ramucirumab plus paclitaxel is the recommended standard for second-line therapy and is approved in the EU. The addition of the anti-vascular endothelial growth factor receptor-2 (VEGFR-2) antibody ramucirumab to paclitaxel increases tumor response rates and prolongs progression-free and overall survival according to the results of the phase III RAINBOW trial . Already in the phase III REGARD trial, ramucirumab monotherapy showed prolonged survival compared to placebo, albeit with a low radiological response rate .
In the phase III KEYNOTE-061 trial, pembrolizumab monotherapy did not show prolonged overall survival compared with chemotherapy . However, an exploratory subgroup analysis recognized a clear benefit for anti-PD-1 immunotherapy in patients with MSI-H gastric cancer . Therefore, PD-1 inhibition is recommended in advanced MSI-H carcinomas at the latest in second-line treatment. Pembrolizumab has European approval for this indication based on the Keynote-061 and Keynote-158 trials . Of note, pembrolizumab in second line for MSI-High advanced gastric cancer is not recommended when immunotherapy was administered in first-line treatment. Other biomarkers, particularly EBV and tumor mutation burden, are also discussed as predictive factors for PD-1 immune checkpoint inhibitor efficacy . However, the evidence to date is insufficient to support a positive recommendation for immunotherapy based upon the presence of these biomarkers.
Studies evaluating trastuzumab, lapatinib, and trastuzumab emtansine for second-line treatment in patients with HER2-positive carcinomas were negative . Therefore, these drugs should not be used in gastric cancer outside of clinical trials. A randomized phase II trial showed an improvement in tumor response rate and overall survival for the antibody–drug conjugate trastuzumab deruxtecan (T-DXd) compared with standard chemotherapy in patients with pretreated HER2-positive advanced gastric cancer . Destiny-GC-04 is an ongoing study, assessing the efficacy and safety of T-DXd compared with ramucirumab and paclitaxel in participants with HER2-positive (defined as immunohistochemistry [IHC] 3 + or IHC 2 + /in situ hybridization [ISH] +) gastric or esophago-gastric junction adenocarcinoma who have progressed on or after a trastuzumab-containing regimen and have not received any additional systemic therapy ( https://classic.clinicaltrials.gov/ct2/show/NCT04704934 ). Prerequisites for inclusion in the Destiny-GC-01 study were at least two prior lines of therapy, prior treatment with a platinum derivative, a fluoropyrimidine, and trastuzumab, and previously confirmed HER2 positivity. The study was recruited exclusively in East Asia. The results of Destiny-GC-01 were largely confirmed in the single-arm phase II Destiny-GC-02 trial, which included non-Asian patients in second-line therapy. Mandatory was platinum–fluoropyrimidine–trastuzumab pretreatment and confirmed HER2 positivity of the tumor in a recent re-biopsy before initiating T-DXd therapy . The EU approval includes the following indication of T-DXd: monotherapy for the treatment of adult patients with advanced HER2-positive adenocarcinoma of the stomach or esophago-gastric junction who have received a prior trastuzumab-based regimen. We recommend, according to the classically established HER2 diagnostic criteria, to check the HER2 status prior to therapy with T-DXd, especially if use in second-line therapy is planned, where a valid alternative with paclitaxel–ramucirumab is available. This recommendation is based on the inclusion criteria of the Destiny-GC-02 trial and the knowledge that loss of HER2 status occurs in approximately 30% of gastric cancers after first-line therapy with trastuzumab . There is initial evidence of efficacy of T-DXd in low HER2 expression . However, data are not yet sufficient to recommend its use.
For the treatment of patients with advanced gastric cancer in the third line and beyond, the best evidence is available for trifluridine–tipiracil (FTD/TPI) based on the phase III TAGS trial. Median overall survival with FTD/TPI vs. placebo was significantly improved in the overall patient cohort, in the third-line cohort, and in the fourth-line cohort . Therefore, if oral therapy is feasible, trifluridine–tipiracil (FTD/TPI) should be used; alternatively, if intravenous therapy is preferred, irinotecan or a taxane can be given, if not already used in a previous line of therapy. As shown above, T-DXd is a very effective third-line therapy for HER2-positive carcinoma after trastuzumab pretreatment. Nivolumab also proved to be effective; however, the data from the ATTRACTION-02 trial were obtained exclusively in Asian patients , so that nivolumab in the third line of treatment in patients with advanced gastric cancer does not have EMA approval, and therefore cannot be recommended. Following the recommendation of a molecular tumor board, an unapproved therapeutic option may also be preferred in justified cases, especially if the recommendation can be based on an ESMO Scale for Clinical Actionability of Molecular Targets (ESCAT) level I or II .
The randomized phase III REGATTA trial showed that gastrectomy in addition to chemotherapy for metastatic disease did not confer a survival benefit compared with chemotherapy alone . International data analyses show that surgical therapy for oligometastatic disease is increasingly perceived as a treatment option . The AIO-FLOT3 phase II trial reported results on the feasibility of resection for stage IV gastric cancer and on survival in highly selected patients with oligometastatic disease without primary progression on FLOT chemotherapy . The potential prognostic benefit of resections for oligometastatic gastric cancer is currently being evaluated in randomized phase III trials [RENAISSANCE (NCT0257836) and SURGIGAST (NCT03042169)]. In a Delphi procedure, a European expert group (OMEC) established a definition of oligometastasis. According to this definition, oligometastatic disease comprises the following phenotypes: 1–2 metastases in either liver, lung, retroperitoneal lymph nodes, adrenal glands, soft tissue or bone .
It is recommended that nutritional and symptom screening with appropriate tools be performed regularly in all patients with advanced gastric cancer, and that appropriate supportive therapies be initiated accordingly. A study from China showed that early integration of supportive-palliative care is effective and suggests a survival benefit in patients with advanced gastric cancer . Weight loss is a multifactorial phenomenon and may be due to digestive tract obstruction, malabsorption, or hypermetabolism. Clinical data sets show that weight loss of ≥ 10% before chemotherapy or ≥ 3% during the first cycle of chemotherapy is associated with poorer survival . Likewise, a change in body composition with impaired muscular capacity was shown to be prognostically unfavorable in patients with advanced gastric cancer . The modified Glasgow Prognostic Score (serum CRP and albumin) can be used to assess the extent of sarcopenia and the prognosis of patients with advanced gastric cancer . It follows that screening for nutritional status should be performed in all patients with advanced gastric cancer (for example, using the Nutritional Risk Screening, NRS) and that expert nutritional counseling and co-supervision should be offered if nutritional deficiency is evident. Dysphagia in proximal gastric cancer can be improved with radiotherapy or stent insertion . Single-dose brachytherapy is the preferred option at some centers and results in longer-lasting symptom control and fewer complications than stent insertion. Stenting is needed for severe dysphagia, especially in patients with limited life expectancy, as the effects of the stent are immediate, whereas radiotherapy improves dysphagic symptoms only after approximately 4–6 weeks . If radiotherapy or stenting is not an option, enteral nutrition via naso-gastric, naso-jejunal, or percutaneously placed feeding tubes may provide relief . The indication for parenteral nutrition follows generally accepted guidelines.
Advancing biomonitoring of eDNA studies with the Anaconda R package

The One Health concept highlights the close connections and interdependency between humans, animals, plants and the surrounding environment . Soil health constitutes a keystone element of One Health. Indeed, soils are vital living ecosystems that support ecosystem services and subsequently sustain the health of plants and animals, including humans . Human well-being is intrinsically tied to the soil's capacity to provide food in sufficient quantity and quality . To illustrate this point, it has been estimated that about 95% of our food comes from soils . In addition to supplying nutrients to humans, soils are a reservoir of beneficial and detrimental microorganisms . The latter include fungi, bacteria, nematodes and viruses that spend all or part of their life cycle in soils; incidental presence can also occur due, for instance, to anthropogenic activities . Though these soil-borne pathogens represent a small fraction of the living organisms in soils, they can potentially cause serious human and plant infectious diseases and outbreaks . Agricultural practices (e.g., organic amendment, tillage, conservation tillage, crop rotations, fallow periods, and use of agrochemical products) can affect soil microbial communities and can either promote or suppress soil pathogens . Thus, as stated by , a healthy soil should display, by definition, low levels of pathogens and related diseases. In their recent review (in accordance with the European Commission's recommendations ), pointed out the lack of biological indicators in soil health assessments and proposed the inclusion of soil biodiversity and pathogens as indicators. The evaluation of pathogen risk is challenging and requires appropriate analytical and statistical methods for the establishment of sensitive, informative and feasible 'biological indicators' (also called 'bioindicators') . Addressing this ambitious task requires the conjunction of diverse disciplines (e.g., agronomy, ecology, bioinformatics, biostatistics, and social science) and a close appropriation of the new emerging technologies for accessing this hidden biodiversity. Ecosystem monitoring powered by environmental 'omics' represents a revolutionary toolbox that is increasingly being used . Within this 'ecogenomic toolbox' , the taxonomy-based methods rely on environmental DNA (eDNA). The term 'eDNA' generally means DNA extracted from an environmental sample without isolating the target organism . The eDNA approach has been applied to diverse environments, from terrestrial to deep-sea habitats, and to a large array of organisms, from microscopic to macroscopic forms (e.g., fungi, bacteria, insects, plants and fishes) . High-throughput eDNA amplicon sequencing (metabarcoding of eDNA) has recently been used for estimating environmental quality from the diversity, composition, structure and functioning of biological communities . As an example, in the context of ecological restoration of degraded lands, soil microbial phyla and functional groups were recently investigated in different regions and proposed as potential indicators of ecosystem recovery . In addition, some community analyses take into consideration significant variations in the relative abundances of taxa at the species level, in terms of Operational Taxonomic Units (OTUs) or Amplicon Sequence Variants (ASVs).
To relate the relative abundance of species to a condition, a consensus seems to have emerged in the scientific community around the DESeq2 tool , a tool originally designed for gene expression (transcriptomics) rather than for eDNA metabarcoding studies. However, some limitations appear, as there are disparities between studies in the way this tool is used in metabarcoding research. There are many standardisation (normalisation) methods, and they are sometimes applied at different stages of the analysis. For example, normalisation is sometimes applied independently of rarefaction and sometimes not (e.g., vs . ); rarefied data are used in some studies but not in others (independently of the normalisation, as in vs . ); and DESeq2 normalisation is sometimes used instead of rarefaction (e.g., ). Also, the notion of enrichment does not seem to be the same from one study to another (e.g., vs . ). Lastly, the taxonomic rank considered differs between studies (e.g., vs . ). All this makes it difficult to compare studies. These differences stem from the absence of standardised guidelines or manuals for the use of this kind of tool in metabarcoding studies, which prevents researchers from following a reproducible and validated methodology. In transcriptomics, to compare the relative changes in the expression levels of a gene or a protein between different conditions, the 'log-fold change' measurement is used . It is calculated as the logarithm of the ratio of the values. A significant positive log-fold change indicates an enrichment (a greater relative abundance), whereas a negative log-fold change indicates a depletion (a lower relative abundance). Since these statistics were originally developed for genomics/transcriptomics, a genetic enrichment stricto sensu corresponds to a group of genes that share a similar biological function and are expressed in the same way, so that there is genetic enrichment for a given function . In the case of taxonomy (by analogy with genomics), this would correspond to enrichment by several ASVs or OTUs (and not just one, based on relative abundance values) that share a higher taxonomic rank (e.g., Kingdom, Class, Order, Family, or Genus, as in ) or a similar biological/ecological function (e.g., plant pathogens, as in FUNGuild ). Here, the Anaconda R package was developed with the aim of homogenising and reframing metabarcoding analyses using the DESeq2 tool (named the 'targeted' analysis), to address the points on the use of statistics discussed above (log-fold change, DESeq2, etc.), and to go further in the analysis of taxonomic enrichment (named the 'global' analysis). Taxonomic enrichment here, stricto sensu , highlights a particular taxonomic rank that is carried by several phylogenetically related species. In the field of identifying bioindicators, working at taxonomic ranks higher than the species can be particularly relevant . Taxonomic enrichment analysis methods can therefore find taxonomic ranks that are over- or under-represented in a condition-specific manner. This 'global' analysis approach follows methods developed for gene expression, which used a hierarchical clustering tree of significant Gene Ontology (GO) categories based on shared genes (e.g., Rank-based Gene Ontology Analysis with Adaptive Clustering, RBGOA). This method was adapted to taxonomy in the Anaconda R package, to obtain an enrichment based on taxonomic ranks (i.e., Kingdom, Class, Order, Family, Genus, and Species).
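As a worked illustration (generic notation, not tied to any particular ASV of this study), the log-fold change of an ASV i between two conditions A and B can be written as

LFC_i = \log_2 ( \mu_{i,B} / \mu_{i,A} )

where \mu_{i,A} and \mu_{i,B} are the normalised mean abundances of ASV i in conditions A and B. A value of +1 corresponds to a doubling of the relative abundance in B (enrichment), a value of -1 to a halving (depletion), and the |log2 fold change| > 2 cut-off used later in this work corresponds to at least a four-fold difference between conditions.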
This shift between GO and a taxonomic ontology was possible thanks to the work of , who adapted the GO system to NCBI Taxon terms. We believe that such a combination of 'targeted' and 'global' approaches could, in the near future, boost the use of DNA metabarcoding in biomonitoring and could even represent the next breakthrough in the assessment of soil health and One Health. Analyses were performed on soil fungal and bacterial communities from Maré Island, an island that is part of the Loyalty Islands in the French archipelago of New Caledonia . In the Loyalty Islands, indigenous people have traditionally practised fire-fallow agriculture . In Maré, yam cultivation, which carries a high symbolic value, is carried out after low burning (ecoburial) in forests and can be followed, before a fallow period, by vegetable or fruit plantations in the two succeeding years. Societal transformations have led to changes in the traditional agricultural practices on the island . Indeed, fallow periods that used to last for one to two decades are increasingly limited to a few years (; Drouin, pers. com.). Worldwide, there is very limited information on how fallow practices affect soil properties, particularly concerning soil microorganisms . The few studies undertaken showed that the effects of the fallow period on microbial diversity are inconsistent, with findings ranging from increases to decreases, or no clear changes . In terms of composition, fallow treatment seems overall to induce changes in fungal and bacterial assemblages . More specifically, revealed that fallow management in a cropping system in China decreased the relative abundance of fungal plant pathogens in soil. However, as stated by the authors, due to the absence of replicated plots (i.e., only one field plot per condition), these results are preliminary. For developing sustainable agricultural practices, it is thus crucial to determine to what extent fallow management influences soil microbial communities and, subsequently, soil health. In this context, our main objective was to determine how changes in traditional agricultural practices on Maré Island impact the soil microbial communities, using both 'classical' community analyses and our newly developed methodology implemented in the Anaconda R package. We hypothesised that the reduction of the fallow period would lead to a possible emergence of fungal and bacterial pathogens in soils. To test this hypothesis, plots were established in cultivations differing in their fallow length, i.e., short fallow (SF) versus long fallow (LF), and compared to 'natural' forests (F) used as reference ecosystems. Soil bacterial and fungal communities were assessed using high-throughput amplicon sequencing of environmental DNA (eDNA). In addition, we also examined other indicators of soil health , such as soil organic carbon, soil nutrient content, pH, vegetation cover, and an additional biological group, the nematodes (characterised using a morphological approach). We subsequently determined whether these parameters were related or not to the soil microbial communities, since they are known to be involved in the accumulation or suppression of pathogens.

Experimental design

Study sites

The archipelago of New Caledonia is located in the southwestern Pacific, just above the Tropic of Capricorn, about 1500 km east of Australia and 2000 km north of New Zealand .
The New Caledonian archipelago encompasses the Loyalty Islands, which include Maré Island ( ; map realised with the R package marmap V. 1.0.10 ). Maré Island comprises four main types of soils . Among them, the Gibbsic Ferralsols are known for their extraordinary content of organic matter (humic soils) and gibbsite, and are used for yam ( Dioscorea sp.) cultivation. All sampling in this study took place on this type of soil.

Conditions and soil sampling

Three condition types were studied: (1) fields that were recently (two to three years ago) cultivated and harvested and then left fallow, representing the short fallow condition (SF); (2) fields that were last cultivated and harvested ten to twenty years ago and were to be planted in the year of the study, called the long fallow condition (LF); and (3) lands that have never been cultivated and were used as a reference, called the forest condition (F). Five plots of 20 x 20 m were established per condition, totalling 15 plots ( and ). In each 20 x 20 m plot, four 5 x 5 m sub-plots were placed in the corners and one in the centre. Five soil samples were collected from each sub-plot at a depth of 0–15 cm using a 5 cm diameter auger. The samples from each plot were combined to form a composite soil sample, resulting in 15 composite samples. These samples were sieved on site using 5 mm and 2 mm sieves, placed in a cooler, and stored at 4°C overnight before being flown to Grande Terre . The soil samples were then divided for analysis: one part for DNA extraction (stored at -20°C) at the Plateforme du Vivant in Nouméa, and the other part sent to France within five days for soil organo-physico-chemical analysis and nematode characterisation.

Soil organo-physico-chemical analyses

All organo-physico-chemical analyses were carried out by an independent laboratory for analysis, study and advice on soil biology (Celesta lab, https://celesta-lab.fr ); see S1 for more information.

Plant community inventory

The 20 x 20 m plots were inventoried for plant species with DBH > 5 cm. In each plot, four 5 x 5 m sub-plots (the same as above) were established, where plant species over 1 m in height were recorded and measured. Additionally, smaller plant species (less than 1 m in height) were counted within these sub-plots.

Nematodes survey

On the same soil samples used for the previous analyses, a nematode survey was carried out by the independent engineering office Elisol environnement ( https://www.elisol.fr ). Taxonomic identification was carried out to the family level. The abundance per site was also recorded (number of individuals per 100 g of dry soil).

Molecular method

Environmental DNA extraction, library generation and sequencing

Environmental DNA extraction, library generation and sequencing were performed as previously described in . The Regional Genotyping Platform (GPTR Génotypage, https://www.gptr-lr-genotypage.com/ ) of the UMR AGAP (CIRAD, INRAE, Montpellier SupAgro) performed the library generation and sequencing. Approximately 13 million paired reads of 250 bp length were obtained for both ITS2 (Fungi) and V4 (Bacteria) in independent sequencing runs.

Bioinformatics

Working environment

The pipeline was run on the Nouméa Institut de Recherche pour le Développement (IRD) cluster under CentOS Linux release 8.3.2011. Downstream analyses were performed on macOS Mojave 10.14.6 (x86_64-apple-darwin17.0, 64-bit). All scripts created and used for this pipeline can be found at https://github.com/PLStenger/Diversity_in_Mare_yam_crop .
Qiime2 framework

Microbiome analysis was performed using the QIIME 2 framework V. 2021.4.0 . Dereplicated and trimmed sequences were imported into the framework as paired-end (Phred33V2) sequences and denoised using the DADA2 plugin, based on the DADA2 V. 1.8 R library , which removed singletons, chimaeras, and sequencing errors and processed the sequences into a table of exact amplicon sequence variants (ASVs) . Negative control library sequences were used as in . ASVs that were present in only a single sample were filtered out, on the grounds that they may not represent real biological diversity but rather PCR or sequencing errors. Finally, all samples were rarefied to the sample with the lowest number of reads, in order to retain the highest possible number of samples ( and Figs).

Statistical analyses

Soil microbial diversity, composition and structure

Statistical analyses were performed using the R software environment V. 4.2.1 . For alpha diversity, the observed number of ASVs , Chao1 , Simpson evenness , Pielou evenness , Shannon entropy , and Faith PD were compared between conditions using Kruskal-Wallis tests, after checking for normality with Shapiro tests; a minimal R sketch of such index calculations is given below. Bray-Curtis dissimilarity and Jaccard similarity index matrices were calculated with the q2-diversity tool.
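As an illustration of this step only, the sketch below computes comparable alpha-diversity indices and Kruskal-Wallis tests in R with the vegan package; the file name, rarefaction depth and condition labels are placeholders, and the snippet does not reproduce the exact QIIME2/q2-diversity outputs used here (Faith PD, which requires a phylogenetic tree, is omitted).

# Minimal sketch: alpha diversity per sample and Kruskal-Wallis tests (assumed inputs)
library(vegan)

# Hypothetical ASV table: samples in rows, ASVs in columns, raw read counts
asv  <- as.matrix(read.table("ASV_table.tsv", header = TRUE, row.names = 1, sep = "\t"))
meta <- data.frame(condition = factor(c(rep("F", 5), rep("LF", 5), rep("SF", 5))),
                   row.names = rownames(asv))                  # placeholder design (15 plots)

set.seed(42)
asv_rare <- rrarefy(asv, sample = min(rowSums(asv)))           # rarefy to the smallest library

alpha <- data.frame(
  observed  = specnumber(asv_rare),                            # observed ASV richness
  chao1     = t(estimateR(asv_rare))[, "S.chao1"],             # Chao1 estimator
  shannon   = diversity(asv_rare, index = "shannon"),          # Shannon entropy
  simpson   = diversity(asv_rare, index = "simpson"),          # Simpson index
  condition = meta$condition
)
alpha$pielou <- alpha$shannon / log(alpha$observed)            # Pielou evenness

# Non-parametric comparison of each index between F, LF and SF
for (idx in c("observed", "chao1", "shannon", "simpson", "pielou")) {
  print(kruskal.test(alpha[[idx]] ~ alpha$condition))
}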
‘Targeted’ and ‘Global’ analysis by Anaconda R package for high-throughput eDNA sequencing data The R functions created for ‘tArgeted differeNtial and globAl enriChment analysis of taxOnomic raNk by shareD Asvs’ (ANACONDA) were bottled into an R package and submitted and then published to CRAN for code review and better use by third parties and can be found at https://cran.r-project.org/web/packages/Anaconda/index.html and https://github.com/PLStenger/Anaconda . This package has been created based on the data presented in this paper and was built for high-throughput eDNA sequencing analysis, but can be used for more classical ecological studies (see below with plants and nematodes data). This work package encompasses two steps: (I) the ‘targeted’ differential analysis from QIIME2 data by the DeSeq2 algorithm, and (II) the ‘global’ analysis by Taxon Mann-Whitney U test analysis from ‘targeted’ analysis. This also integrates the FunGuild and Bactotraits databases (for using FunGuild, Python V. > 2.7 is required). For the first step (I), the Anaconda R package estimates variance-mean dependence in count/abundance ASVs data from high-throughput sequencing assays and test for differential represented ASVs (through the comparison of previously explained conditions (here in our case, F vs . LF, F vs . SF, and SF vs . LF) based on a model using the negative binomial distribution as in for transcriptomic data (but instead of having gene expressions, we have an abundance of species). This step, therefore, focuses on whether there is an over-representation or an under-representation of specific species in one condition compared to another in a significant way. Here is a simplification of the protocol: download the R package on CRAN ( https://cran.r-project.org/web/packages/Anaconda/index.html ) or in its GitHub mirror ( https://github.com/PLStenger/Anaconda ). i ) Use the QIIME2 files ‘ ASV . tsv ’ which is the list of ASVs abundance for each of your samples created by the QIIME2 pipeline; ii ) ‘ taxonomy . tsv ’ which is the file with the listed taxonomy-ASV key for the rarefied dataset created by the QIIME2 pipeline (will be useful for ‘global’ analysis (II)); iii ) ‘ taxonomy_RepSeq . tsv ’ which is similar to the previous file, but from the representative sequences QIIME2 step (will be useful for ‘global’ analysis (II)), and finally a handmade file named iv ) ‘ SampleSheet_comparison . txt ’. More detailed material and methods can be found at https://github.com/PLStenger/Anaconda and . On R, the dASVa object (differential ASV abundance object) will be created to be fit on a Gamma-Poisson Generalised Linear Model (dispersion estimates for Negative Binomial distributed data), and the dispersion plot and the sparsity plot can be checked. The corresponding taxonomy can be added in the ASVs keys in results and put in a text and Excel file in output. FunGuilds can be added for fungi and Bactotrait for bacteria. MA plots are disponible in the package to adapt the p -value and the FoldChange cut-off. For the second step, the ‘global’ analysis (II) by Taxon Mann-Whitney U test analysis will use the results of the ‘targeted’ analysis. This step does not specifically focus on species that are over- or under-represented in a given condition (like step I) but on all taxonomic ranks (e.g., Phylum, Class, Order, Family, Genus and Species). For this second step, more files are needed and can be downloaded here https://github.com/PLStenger/Anaconda . The first of these files, the ‘ ncbitaxon_ontology . 
obo ’, is an NCBI organismal classification file adapted for the Anaconda R package, originally based on . The other files are a correspondence for fungi and bacteria QIIME2 code to NCBI Taxon code. Here, the Mann-Whitney U (MWU) test analysis is realised on the correspondence of the NCBI Taxon among the analogous database (NCBITaxon_MWU). This NCBITaxon_MWU uses a continuous measure of significance (such as fold-change or -log( p -value)) to identify NCBITaxon that are significantly enriched with either up- or down-represented ASVs. If the measure is binary (0 or 1) the script will perform a typical ’NCBITaxon enrichment’ analysis based on Fisher’s exact test: it will show NCBITaxon over-represented among the ASVs that have 1 as their measure. On the plot, different fonts are used to indicate significance, and colour indicates enrichment with either up (red) or down (blue) regulated ASVs. The tree on the plot is a hierarchical clustering of NCBITaxon based on shared ASVs. As in , categories that do not have any branch length separating them are included within one another. Also as in , the fraction next to the category name indicates the fraction of ’good’ ASVs in it; ’good’ ASVs are the ones exceeding the arbitrary absValue cutoff (option in taxon_mwuPlot. For realised a Fisher’s based test, specify absValue = 0.5. This value does not affect statistics and is used for plotting only. The original idea was for gene differential expression analysis from adapted here for taxonomic analysis (except that instead of having different functional categories of genes, we have different taxonomic ranks). This step is relevant if there is a consequent amount of data, and to hook a group of species that are taxonomically similar and present in a significant quantity in a condition. Anaconda R package for classical ecological data We applied Anaconda analyses to non-sequencing data (plants and nematodes) from classical inventories, using the ’targeted’ analysis to examine abundance files formatted to match QIIME2 ASV . tsv files (data on plants did not constitute an exhaustive database and data on nematodes stopped at family rank for the ‘global’ analysis). Study sites The archipelago of New Caledonia is located in the southwestern Pacific, just above the Capricorn tropic, about 1500 km east of Australia and 2000 km north of New Zealand . The New Caledonian archipelago encompass the Loyalty Islands, which includes Maré Island ( –map realised with the R package marmap V. 1.0.10 ). The Maré Island . comprise four main types of soils . Among them, the Gibbsic Ferralsol are known for their extraordinary content of organic matter (humic soils) and gibbsite and used for yam ( Dioscorea sp.) cultivation. All sampling in this study took place on this type of soil. Conditions and soil sampling Three condition types were studied: (1) fields that were recently (two to three years ago) cultivated and harvested, then let in fallow, representing the short fallow condition (SF), (2) fields that were last cultivated and harvested ten to twenty years ago and which will be planted in the year of the study, called the long fallow condition (LF), and (3) lands that have never been cultivated and are used as a reference, called the forest condition (F). Five plots of 20 x 20 m were established per condition, totalling 15 plots ( and ). In each 20 x 20 m plot, four 5 x 5 m sub-plots were placed in the corners and one in the centre. 
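The 'global' step can be sketched in a similarly generic way: for each term of a taxonomic rank, the signed significance measures of its member ASVs are compared with those of all other ASVs using a Mann-Whitney U test. The snippet below is a simplified stand-in for the NCBITaxon_MWU machinery, with entirely hypothetical ASV identifiers, measures and genus assignments; it is not the package's own function.

# Simplified rank-based taxonomic enrichment test (illustrative data only)
# 'res_df' holds one row per ASV with a signed significance measure
# (e.g., sign(log2FoldChange) * -log10(p-value)) and a label for one taxonomic rank.
res_df <- data.frame(
  asv     = paste0("ASV", 1:8),                                # hypothetical ASVs
  measure = c(3.1, 2.4, 1.8, -0.2, -1.5, 0.3, -2.7, 0.9),      # hypothetical signed -log10(p)
  genus   = c("GenusA", "GenusA", "GenusA", "GenusB",
              "GenusB", "GenusC", "GenusB", "GenusC")          # hypothetical assignments
)

# Mann-Whitney U test: are the measures of ASVs belonging to a given genus
# shifted relative to the measures of all remaining ASVs?
taxon_mwu <- function(df, rank_col, term) {
  in_term <- df[[rank_col]] == term
  wilcox.test(df$measure[in_term], df$measure[!in_term])
}

for (g in unique(res_df$genus)) {
  cat(g, ": p =", signif(taxon_mwu(res_df, "genus", g)$p.value, 3), "\n")
}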
obo ’, is an NCBI organismal classification file adapted for the Anaconda R package, originally based on . The other files are a correspondence for fungi and bacteria QIIME2 code to NCBI Taxon code. Here, the Mann-Whitney U (MWU) test analysis is realised on the correspondence of the NCBI Taxon among the analogous database (NCBITaxon_MWU). This NCBITaxon_MWU uses a continuous measure of significance (such as fold-change or -log( p -value)) to identify NCBITaxon that are significantly enriched with either up- or down-represented ASVs. If the measure is binary (0 or 1) the script will perform a typical ’NCBITaxon enrichment’ analysis based on Fisher’s exact test: it will show NCBITaxon over-represented among the ASVs that have 1 as their measure. On the plot, different fonts are used to indicate significance, and colour indicates enrichment with either up (red) or down (blue) regulated ASVs. The tree on the plot is a hierarchical clustering of NCBITaxon based on shared ASVs. As in , categories that do not have any branch length separating them are included within one another. Also as in , the fraction next to the category name indicates the fraction of ’good’ ASVs in it; ’good’ ASVs are the ones exceeding the arbitrary absValue cutoff (option in taxon_mwuPlot. For realised a Fisher’s based test, specify absValue = 0.5. This value does not affect statistics and is used for plotting only. The original idea was for gene differential expression analysis from adapted here for taxonomic analysis (except that instead of having different functional categories of genes, we have different taxonomic ranks). This step is relevant if there is a consequent amount of data, and to hook a group of species that are taxonomically similar and present in a significant quantity in a condition. R package for classical ecological data We applied Anaconda analyses to non-sequencing data (plants and nematodes) from classical inventories, using the ’targeted’ analysis to examine abundance files formatted to match QIIME2 ASV . tsv files (data on plants did not constitute an exhaustive database and data on nematodes stopped at family rank for the ‘global’ analysis). Soil eDNA pre-processing analysis For the ITS2 marker (fungi) 2,594,514 raw sequences from 15 samples were obtained and then 270,160 sequences were kept after different cleaning steps ( and Tables). Due to a calculated rarefaction of 12,582 reads, four plots were not kept for further analysis (namely, plots F2, LF2, LF5 and SF3) . For the V4 marker (bacteria), 3,064,846 raw sequences from 15 samples were obtained and then 236,235 sequences were kept after the cleaning steps ( and Tables). As a result of a calculated rarefaction of 4,483 reads, two samples were removed for subsequent analyses (i.e., F2 and LF2) . Thus, 270,160 quality-filtered fungal sequences (ITS2) and 102,277 quality-filtered bacterial sequences (V4) from 11 and 13 soil samples respectively were finally generated and further analysed. Soil fungal and bacterial diversity In total, 383 and 94 fungal and bacterial ASVs, respectively, were delineated. For both fungi and bacteria, no significant differences were observed in diversity indices between the conditions (i.e., SF, LF, and F) ( and Figs). Soil fungal and bacterial composition and functional groups presents the relative abundances of the fungal phyla and functional groups (i.e., guilds and trophic modes) . 
Ascomycota was observed as the most abundant phylum in each condition (SF: 55.4% ±18.6%, LF: 63.9% ±6.9%, and F: 61.5% ±8.1%), followed by Basidiomycota (SF: 33.5% ±18.5%, LF: 23.9% ±7.8%, and F: 28.9% ±11.1%). All other phyla (Rozellomycota, Chytridiomycota, Mucoromycota, Calcarisporiellomycota, Glomeromycota, and Mortierellomycota) showed relative abundances below 8%. No significant variations in the proportions of phyla relative abundances between the three conditions were detected (Kruskal-Wallis tests). Regarding fungal guilds, the undefined saprotroph guild was the most relatively abundant in short fallow (44.4% ±11.3%) and long fallow (40.4% ±9.4%), and the second most abundant in the forest (36.2% ±19.1%). In the forest, the animal pathogen guild was the most relatively abundant guild, with a relative abundance of 41.0% ±20.2%, whereas it was the second most abundant guild in short fallow (25.4% ±13.8%) and in long fallow (40.0% ±21.5%). The plant pathogen guild was the third most relatively abundant guild in all conditions (SF: 13.6% ±8.3%, LF: 9.0% ±8.9%, F: 9.4% ±6.3%). The guild of ectomycorrhizal fungi was the fourth most relatively abundant guild in all conditions (SF: 8.9% ±5.3%, LF: 5.3% ±3.1%, F: 5.9% ±3.6%). All other guilds showed relative abundances below 8%. When comparing the different conditions, Kruskal-Wallis tests revealed no significant variation in the relative abundances of these guilds. The relative abundance of each bacterial phylum (and the single archaeal phylum), and the corresponding functional traits (excluding Archaea), are presented respectively in . For the phylum composition, in the three conditions studied, two bacterial phyla dominated the soil communities, namely the Firmicutes (SF: 31.5% ±9.3%; LF: 27.3% ±1.5%; F: 38.9% ±5.4%) and the Verrucomicrobiota (SF: 17.4% ±9.3%; LF: 32.7% ±13.3%; F: 15.3% ±5.4%). The only detected archaeal phylum, the Crenarchaeota, was also observed in relatively high proportions (SF: 25.9% ±12.5%; LF: 16.8% ±8.4%; F: 18.6% ±2.5%) . None of these phyla showed significant differences in their relative abundances between the three compared treatments. The only phylum that presented significant variations in its proportions (SF: 4.9% ±0.6%; LF: 1.2% ±1.4%; F: 9.4% ±2.3%) was the Proteobacteria (Kruskal-Wallis test, p- value = 0.004723). Concerning the bacterial functional traits, the organotroph-chemotroph functional group was dominant in all conditions (SF: 80.4% ±2.7%, LF: 72.0% ±11.2%, and F: 80.9% ±2.1%), followed by the heterotroph group (SF: 11.7% ±2.7%, LF: 21.5% ±12.2%, F: 10.4% ±2.5%). The organotrophs and the composite group systematically represented less than 8%. The heterotrophs were the only functional group that showed a significant variation between conditions (Kruskal-Wallis test, p- value = 0.0333).

Microbial communities' structure

The NMDS ordination, based on the Bray-Curtis dissimilarity index, suggests that soil fungal communities were distinct between the studied conditions, particularly between short fallow and forest . The PERMANOVA analysis supports this observation (PERMANOVA: p- value = 0.018, R 2 = 0.277 for sites and 0.723 for residuals; post hoc pairwise adonis, F vs . SF p- value = 0.019; F vs . LF and LF vs . SF non-significant). Conversely, for bacteria, no community structure was observed ( ; PERMANOVA non-significant); a minimal R sketch of this ordination and PERMANOVA workflow is given below.
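The community-structure tests just described can be reproduced generically with vegan; the sketch below reuses the placeholder rarefied ASV table and condition factor from the earlier alpha-diversity sketch, with hypothetical soil-texture columns for the db-RDA, and is not the exact script behind the published figures.

# Minimal sketch: NMDS, PERMANOVA and db-RDA on a rarefied ASV table (assumed inputs)
library(vegan)

bray <- vegdist(asv_rare, method = "bray")                     # Bray-Curtis dissimilarity between samples

nmds <- metaMDS(bray, k = 2, trymax = 100)                     # non-metric multidimensional scaling
plot(nmds, type = "t")                                         # quick ordination plot of the samples

# Overall test of community differences between F, LF and SF (9999 permutations)
adonis2(bray ~ condition, data = meta, permutations = 9999)

# Example db-RDA relating community structure to soil texture; 'clay' and 'silt'
# are hypothetical columns of 'meta' standing in for the measured soil variables
dbrda_fit <- dbrda(bray ~ clay + silt, data = meta)
anova(dbrda_fit, permutations = 999)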
Bacterial communities, in contrast to fungi, thus did not exhibit any significant differences between land-use conditions.

Influence of physico-chemical parameters

The soil texture (i.e., the proportions of clay, silt, and sand) was homogeneous among plots and, hence, among the three conditions investigated ( , and Figs), and was classified as silt loam. Organic matter, carbon (C), nitrogen (N), and pH showed significant differences ( p- values < 0.05), with systematically higher values in the forest ( and ). Among related organic matter parameters, the C/N ratio showed significant differences ( p- value = 0.029), with a higher ratio in long fallow. Regarding the microbial biomass parameters, significant differences were also observed ( and ). Indeed, the carbon and the estimated total microbial biomass, as well as the estimated nitrogen, phosphorus, potassium, calcium, and magnesium stored in microbial biomass, showed significant differences between conditions ( p- values < 0.05). In the pairwise comparisons, forest soils presented significantly higher values for most parameters. In addition, significant differences were found in mineralised carbon (microbial activity), which decreased from forest to short fallow ( p- value = 0.015, ; ). Since a structuring of communities according to the studied conditions was only observed for fungi , the db-RDA analysis using soil physico-chemical parameters as explanatory variables was only performed on this microbial group. The db-RDA representation showed that fungal phyla were significantly related to soil texture and not to the other parameters ( ; only the significant parameter, i.e., soil texture, is shown). The PERMANOVA (nPerm = 9999; 20% of the variance explained, p- value = 0.007) and post hoc tests (clay, p- value = 0.031; silt, p- value = 0.009) supported this relationship. More precisely, the Basidiomycota were related to the silt content, whereas the Ascomycota were inversely related to the clay content . However, no relationships were detected for the fungal communities of the fallow and forest conditions. In summary, the soil texture was homogeneous among plots, but the soil physico-chemical properties, including organic matter, carbon, nitrogen, and pH, showed significant differences between the forest and fallow conditions, with the forest having systematically higher values.

Influence of plants

Kruskal-Wallis tests, followed by Dunn post hoc tests, revealed significant differences in plant species composition between the forest and fallow conditions, with several species showing significant presence or absence in specific conditions, such as Acacia spirorbis ( p- value = 0.00621), which was absent from the forest, and Dodonaea viscosa ( p- value = 0.00327), which was more present in short fallow.

Influence of nematodes

One nematode family, the Aphelenchoididae (Kruskal-Wallis test: p- value = 0.00918), showed significant changes between conditions, with differing abundances in long-term and short-term fallows .

'Targeted' analysis for fungi and bacteria with the Anaconda R package

Eleven and 13 samples were used for the fungal and bacteria/archaea analyses, respectively (as a result of the samples removed during the rarefaction step). An estimate of the dispersion by shrinkage can be visualised by plotting the dispersion estimates against the mean ASV abundance (here, ASV abundance is used as a 'count'), fitting only an intercept term.
First, and following , the maximum likelihood estimate of the dispersion was obtained for each ASV using only that ASV's data (black dots). Then, a curve (red) was fitted to the maximum likelihood estimates to capture the general trend of the dispersion-mean dependence. This fit was used as a prior mean for a second round of estimation, which resulted in the final dispersion estimates at the maximum a posteriori. This can be understood as a narrowing (blue circles) of the noisy per-ASV estimates towards the consensus represented by the red line. The black points circled in blue were detected as dispersion outliers and were not shrunk towards the prior (the shrinkage would follow the dotted line). In our case, only a few ASVs were not fitted by the (here, parametric) model (which is expected according to ), and the results were very similar between the two kingdoms, although the bacteria showed fewer ASVs in comparison. The analysis of the inter-sample relationships after this transformation showed that the variability observed in the previous analyses (e.g., sections 3.2 to 3.4) was well preserved. For example, the similarity between the NMDS and the PCA presented here was remarkable. Nevertheless, some nuances can be observed in this variability. For fungi, the hierarchical clustering on Euclidean distances of logarithm-transformed ASV abundances with the average-linkage method revealed larger differences in sample relationships than the PCA. As an example, the samples from the forest condition ('F') were tightly grouped in the PCA, whereas they fell into three different sub-clusters in the hierarchical clustering. The 31% (18% + 13%) of total variation explained by the PCA indicates that only a small part of the data accounts for this convergence. The sample-to-sample heatmap based on the rlog transformation, after trimming poorly represented ASVs, showed a certain homogeneity of the samples, which could reflect a variability explained homogeneously across ASVs (i.e., not pulled by only a few ASVs in a specific way, but by several ASVs in the same direction). For bacteria and archaea, the hierarchical clustering on Euclidean distances of logarithm-transformed ASV abundances with the average-linkage method showed differences in sample relationships similar to those in the PCA. As an example, the samples F1, F3, F4, SF1, and SF2 lay at the margin of the other samples in the PCA, which was well highlighted by a corresponding sub-cluster in the hierarchical clustering. The 34% (19% + 15%) of total variation explained by the PCA indicates that only a small part of the data accounts for the presented variation. The sample-to-sample heatmap based on the rlog transformation, after trimming poorly represented ASVs, showed higher heterogeneity within some of the samples, as for LF5-SF5 for example; this could reflect a variability explained by a few ASVs in a heterogeneous way (a statistical variation pulled by a few ASVs rather than by many). Such similarities between the previous analyses and the Anaconda analyses, together with these nuances, ensured that the variability structure of the dataset (e.g., a few ASVs strongly over- or under-represented in a condition versus many similar ASVs slightly over- or under-represented in a condition) was maintained, while allowing us to explore the finest variation so that our biological/ecological question could be addressed with further analyses (see below).
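For illustration only (assumed object names; not the authors' code), the sample-relationship exploration described above, i.e., rlog transformation, PCA, average-linkage hierarchical clustering and a sample-to-sample heatmap, could be sketched with DESeq2 and the pheatmap package (assumed installed) as follows.

library(DESeq2)

rld <- rlog(dds, blind = TRUE)        # regularised-log transform of the ASV counts
mat <- t(assay(rld))                  # samples x ASVs

# PCA of the transformed abundances, and the variance explained by the first two axes
pca <- prcomp(mat)
summary(pca)$importance[2, 1:2]

# Hierarchical clustering of samples (Euclidean distance, average linkage)
hc <- hclust(dist(mat, method = "euclidean"), method = "average")
plot(hc)

# Sample-to-sample distance heatmap
pheatmap::pheatmap(as.matrix(dist(mat)),
                   clustering_method = "average",
                   main = "Sample-to-sample distances")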
The clustered heatmap of the 75 most abundant ASVs, based on Euclidean distance with the average-linkage method, showed no discernible pattern for either fungi or bacteria/archaea based on the most prevalent ASVs. This motivates condition-specific analyses to recover ASVs that are specifically over- or under-represented in the different conditions, which could therefore explain the observed variations. The DESeq2 algorithm allowed such comparisons, and here, with adjusted p-value < 0.05 and |log2 fold change| > 2, the F vs. LF, F vs. SF, and SF vs. LF comparisons identified, respectively, 43, 96, and 43 significantly under- or over-represented ASVs for fungi, and 33, 35, and 17 significantly under- or over-represented ASVs for bacteria . Venn diagram representations allowed us to recover ASVs that were i) specific to a comparison and, most importantly, ii) specific to a condition (the latter correspond to those with a common denominator; e.g., F vs. SF compared with F vs. LF will show ASVs specific to F). For fungi, of the 43 ASVs significantly over- or under-represented in the F vs. LF pairwise comparison, nine were specific to this comparison. Out of the 96 ASVs significantly over- or under-represented in the F vs. SF comparison, 40 were specific to this comparison. For the SF vs. LF comparison, of the 43 ASVs significantly over- or under-represented, 13 were specific to this pairwise comparison. Thirty, 4, and 26 significantly over- or under-represented ASVs were specific to the forest, the long fallow and the short fallow, respectively (condition-specific ASVs). For the bacteria/archaea, of the 33 ASVs significantly over- or under-represented in the F vs. LF comparison, two were recovered only from this comparison. Out of the 35 ASVs significantly over- or under-represented in the F vs. SF comparison, only one was specific to this pairwise comparison. Of the 17 ASVs significantly over- or under-represented in the SF vs. LF comparison, none were specific. Twenty-four, 7, and 10 significantly over- or under-represented ASVs were restricted, respectively, to the forest, the long fallow and the short fallow (condition-specific ASVs). Looking at the condition-specific fungal ASVs (p-value < 0.05; |log2 fold change| > 2), 15 ASVs were over-represented in short fallows (present only in short fallows), particularly Sarocladium kiliense, Acrocalymma fici and Exophiala aquamarina, and 11 were under-represented in short fallows (present in forests and long fallows, but not in short fallows), notably Acrocalymma walkeri, Angustimassarina acerina, and Mortierella minutissima. Among the ASVs specific to the long fallows, three were over-represented, such as Trechispora invisitata, and one was under-represented, namely Agaricales sp. 01. Among the ASVs specific to the forests, 20 were over-represented, like Mortierella bisporalis, Lycogalopsis solmsii and Hygrocybe sp., and 10 were under-represented, like Spizellomyces punctatus, Botryosphaeria sp. and Mortierella alpina. For the condition-specific ASVs of bacteria and archaea (p-value < 0.05; |log2 fold change| > 2), nine ASVs were over-represented in short fallows, such as Burkholderiales, Gemmataceae, or Gaiella, and one was under-represented, an ASV assigned to the Vicinamibacteraceae family. In long fallows, four ASVs were over-represented, for instance, bacteriap25 sp., Entotheonellaceae sp.
and Vicinamibacteraceae sp., and three were under-represented, i.e., Hyphomicrobium sp., Candidatus Udaeobacter and Nitrososphaeria sp. (archaea). Finally, nine ASVs were over-represented, like Candidatus Xiphinematobacter, Bacillus sp. and Acidibacter sp., and 15 were under-represented in F, such as three archaeal ASVs belonging to the Nitrososphaeria genus. So here, hierarchical clustering analysis revealed greater differences in fungal sample relationships, with distinct sub-clusters in forest samples, while PCA showed a more compact grouping. Bacterial and archaeal samples exhibited similar patterns, with some distinct sub-clusters and others showing higher heterogeneity. Condition-specific ASVs were identified in fungi and bacteria/archaea, with distinct ASVs over- or under-represented in each condition, including Sarocladium kiliense, Acrocalymma fici, and Exophiala aquamarina in short fallows, and Mortierella bisporalis, Lycogalopsis solmsii, and Hygrocybe sp. in forests.
'Global' analysis of fungal and bacterial communities with the Anaconda R package
Concerning the 'global' analysis for fungi, 1174 ASVs matched 653 different NCBITaxon terms, and 639 NCBITaxon terms remained. After the secondary clustering, the MWU test yielded 23, 31, and 21 NCBITaxon terms at 10% FDR for the pairwise comparisons F vs. SF, F vs. LF, and SF vs. LF, respectively. Here, compared to the 'targeted' analysis, when an affiliation is made at a rank higher than species (e.g., family, order, or genus), it corresponds to several ASVs that share the same taxonomic rank. When a group of ASVs is ascribed to the species level, it means that several ASVs share this taxonomic affiliation and can correspond to different sub-species or strains. In the forest, compared to the short fallow, numerous ASVs assigned to the fungal entomopathogen Metarhizium robertsii were over-represented (p-value < 0.05) .
In contrast, other ASVs were under-represented in the forest, hence over-represented in the short fallow, such as the ones ascribed to the plant-pathogenic genus Curvularia (p-value < 0.05), the potential plant pathogens Spizellomyces punctatus (p-value < 0.01) and Fusarium oxysporum (p-value < 0.05), and the potential human pathogen Exophiala equina (p-value < 0.05) . In the comparison with the long fallow, Metarhizium robertsii was again found in higher abundance in the forest. Only Spizellomyces punctatus was significantly over-represented (p-value < 0.05) in the long fallow, but to a lesser extent than in the short fallow . The comparison between the short and the long fallow periods showed an over-representation in the former of numerous taxa: Sarocladium, the Fusarium oxysporum species complex, Pyrenochaetopsis leptospora, Curvularia, Pleosporaceae, Alternaria, Chaetomella, Chytridiomycota, and Spizellomyces punctatus (p-value < 0.01). Leotiomycetes and Talaromyces sect. Talaromyces were also over-represented in the short fallow (p-value < 0.05). In the long fallow, Hymenochaete acerosa, Glutinoglossum, Lycogalopsis solmsii, Trechispora invisitata, Saksenaea trapezispora, Metarhizium robertsii, and Exophiala equina were found in higher abundances than in the short fallow (p-value < 0.01). For bacteria and archaea, 486 ASVs matched 108 different NCBITaxon terms, and 100 NCBITaxon terms remained. After the secondary clustering, the MWU test yielded zero NCBITaxon terms at 10% FDR for all comparisons. This result mirrors the ones presented in , which displays a variability explained by a few ASVs in a heterogeneous way (a variability pulled by only some ASVs in a specific direction, rather than by several ASVs in the same direction). It means that some ASVs are very strongly over- or under-represented in a condition (which is why the 'targeted' analyses worked), but that there are not enough similar ASVs slightly over- or under-represented in the same direction within a condition (which is why the 'global' analysis does not return significant terms). To summarize, the analysis of fungal ASVs in the forest, short fallow, and long fallow conditions revealed significant differences in taxonomic affiliations, with Metarhizium robertsii being over-represented in the forest and the long fallow, and Curvularia, Spizellomyces punctatus, and Fusarium oxysporum being under-represented in the forest and over-represented in the short fallow, whereas Leotiomycetes and Talaromyces sect. Talaromyces were over-represented in the short fallow. The analysis of bacterial and archaeal ASVs revealed that only 108 taxonomic groups were represented, with the majority of ASVs remaining unclassified, and no significant differences were found between the forest, short fallow, and long fallow conditions, indicating that a few ASVs strongly drive the variation in this community composition.
Anaconda package on ecological data
To determine the usefulness of this tool for other types of data, the Anaconda package was used on plant and nematode ecological data. Using the 'targeted' analysis, six plant species were observed as forest-specific (p-value < 0.05; |log2 fold change| > 2), notably Aglaia elaeagnoidea, Diospyros fasciculosa, and Schefflera gabriellae . Acacia spirorbis, Dodonaea viscosa, and Psidium guajava were only encountered in the two fallowing periods .
So here, the application of the ‘targeted’ analysis to plant and nematode ecological data revealed significant differences in species abundance between the forest and fallow conditions, with specific plant species showing preferences for either forest or fallow environments, and nematode families exhibiting altered abundance patterns in response to different land-use regimes. This clearly demonstrates the usefulness of this package for ‘classic’ ecological data, and its use can therefore be extended beyond metabarcoding data. For the ITS2 marker (fungi) 2,594,514 raw sequences from 15 samples were obtained and then 270,160 sequences were kept after different cleaning steps ( and Tables). Due to a calculated rarefaction of 12,582 reads, four plots were not kept for further analysis (namely, plots F2, LF2, LF5 and SF3) . For the V4 marker (bacteria), 3,064,846 raw sequences from 15 samples were obtained and then 236,235 sequences were kept after the cleaning steps ( and Tables). As a result of a calculated rarefaction of 4,483 reads, two samples were removed for subsequent analyses (i.e., F2 and LF2) . Thus, 270,160 quality-filtered fungal sequences (ITS2) and 102,277 quality-filtered bacterial sequences (V4) from 11 and 13 soil samples respectively were finally generated and further analysed. In total, 383 and 94 fungal and bacterial ASVs, respectively, were delineated. For both fungi and bacteria, no significant differences were observed in diversity indices between the conditions (i.e., SF, LF, and F) ( and Figs). presents the relative abundances of the fungal phyla and functional groups (i.e., guilds and trophic modes) . Ascomycota was observed as the most abundant phylum in each condition (SF: 55.4% ±18.6%, LF: 63.9% ±6.9%, and F: 61.5% ±8.1%), followed by Basidiomycota (SF: 33.5% ±18.5, LF: 23.9% ±7.8%, and F: 28.9% ±11.1%). All other phyla (Rozellomycota, Chytridiomycota, Mucoromycota, Calcarisporiellomycota, Glomeromycota, and Mortierellomycota) showed a relative abundance inferior to 8%. No significant variations in the proportions of the relative phyla abundances between the three conditions were detected (Kruskal Wallis test). Regarding fungal guilds, the undefined saprotroph guild was the most relatively abundant in short fallow (44.4% ±11.3%) and long fallow (40.4% ±9.4%), and the second most abundant in the forest (36.2% ±19.1%). In the forest, the animal pathogen guild was the most relatively abundant guild with a proportion a relative abundance of 41.0% ±20.2%, whereas it was the second most abundant guild in short fallow (25.4% ± 13.8%) and long fallow, as well (40.0% ± 21.5%). The plant-pathogen guild was the third most relatively abundant guild for all conditions (SF: 13.6% ±8.3%, LF: 9.0% ±8.9%, F: 9.4% ±6.3%). The guild of ectomycorrhizal fungi was the fourth most relatively abundant guild in all conditions (SF: 8.9% ± 5.3%, LF: 5.3% ± 3.1%, F: 5.9% ± 3.6%). All other guilds showed a relative abundance inferior to 8%. When comparing the different conditions, the Kruskal-Wallis test revealed no significant variation in the relative abundances of these guilds. The relative abundance of each bacterial phyla (and the only archaeal phyla), and their corresponding functional traits (excluding Archaea) are presented respectively in . 
'Classical' community analysis: No effect of agricultural practice changes in the first instance
The so-called 'classical' community analysis (which refers to the diversity, composition, and structure investigations that are commonly made in community analyses) revealed no effects of cultural practice changes on soil microbial communities. Indeed, no differences in microbial diversity were found between short- and long-term fallowing, and forest, for both fungi and bacteria/archaea. Variations in phylum composition were only observed for Proteobacteria, with a higher proportion in the forest, but not between fallow periods. Based on , the relative abundance of Proteobacteria may indicate soil and land degradation, suggesting that both short- and long-fallow periods (the latter lasting over a decade) could be considered as degraded systems. As for the diversity and the phyla composition, the functional groups did not reveal a clear tendency, especially in terms of pathogens and beneficial microbe relative abundances. Looking at the soil microbial community structure, a significant partitioning was only observed for fungi, which resulted from differences with the forest, not from any fallowing period effect. It's noteworthy that despite our inability to detect soil microbial changes due to the agricultural practice, some 'global' tendencies seem to emerge from those 'classical' approaches. Indeed, for fungi, in all conditions Ascomycota was observed as the most abundant phylum, followed by Basidiomycota.
In the literature, the dominance of Ascomycota over Basidiomycota has recently been suggested as an indicator of ecosystem degradation . This may thus suggest that even the forests used as reference ecosystems are to some extent degraded. Regarding bacteria, the Firmicutes and Verrucomicrobiota phyla dominated the soil communities. The Firmicutes have been classified as copiotrophs and the Verrucomicrobiota as oligotrophs . However, a recent study has conversely shown a positive correlation between Verrucomicrobiota and soil carbon content . The high soil organic carbon content of Maré's Gibbsic Ferralsols, even in cultivated soils , could thus be a probable explanation for the over-representation of these two bacterial groups. In addition to these bacterial phyla, the Crenarchaeota was also well represented in all conditions and was the only archaeal representative. The dominance of archaeal communities by Crenarchaeota on Maré island is in accordance with the observations made by on diverse soils at a worldwide scale. This group may play central roles in biogeochemical cycles in soils . However, deeper investigations are needed to better understand the roles of microorganisms in the Gibbsic Ferralsols of Maré island. Indeed, apart from soil texture, environmental variables were not found to influence soil microorganisms. At this stage, based on 'classical' analyses, we cannot conclude that changing agricultural practices on Maré Island has any effect on soil microbial communities. We cannot rule out a genuine lack of effect, but we also acknowledge the need for more in-depth approaches to highlight potential changes, particularly in the soil health and One Health context.
Revolutionising soil health and One Health through advanced detection of soil pathogens with the Anaconda package
The two newly developed statistical analyses implemented in the Anaconda package, namely the 'targeted' and 'global' analyses, highlighted the over-representation in the short fallow of microbial ASVs, particularly fungal ones, ascribed to plant and animal (including human) pathogens. Indeed, fungal pathogens such as Acrocalymma fici , known as a pathogen of cultivable trees , Chaetomella raphigera , recognised as a fruit rot pathogen , and Gibellulopsis chrysanthemi , identified as a root rot pathogen , were detected in significantly higher proportions through the 'targeted' approach in the short-term fallow . Additionally, an undetermined species belonging to the Botryosphaeria genus, a taxon known to be associated with grapevine decline , was absent from the forest and present in both fallows, with a higher abundance in the short-duration fallow. In congruence with all these results, an increase in plant-pathogenic fungi in the short fallow was observed using the 'global' statistical investigation. For instance, taxa such as Fusarium oxysporum , Alternaria and Curvularia , known to be pathogenic to many plant species , were particularly present in the short fallow compared to both the long fallow and the forest ecosystem. In addition to these plant-detrimental microbes, a fungal taxon of primary interest for human health has also been detected in the short fallow soils, namely Sarocladium kiliense (formerly Acremonium kiliense). S. kiliense is a soil saprophytic fungus that can cause opportunistic infections in immunocompetent and immunocompromised individuals, with diverse manifestations such as dermatophytosis, onychomycosis, mycetoma, pneumonia and fungemia . Outbreaks of S. kiliense
in immunodepressed patients have been reported in the literature . These clusters were likely linked to infections in clinical settings , but a probable environmental source has also been suggested by . Recently, a fatal disseminated infection in a diabetic patient with coronavirus disease 2019 (COVID-19) has been reported by in Iran. The severity of the diseases that can result from S. kiliense underlines the need for a high level of clinical attention in this area. In New Caledonia, two cases involving undetermined Sarocladium species have hitherto been reported at the public hospital (data on geographical origin and patient health not available) (Arnaud Cannet, pers. com.). In light of the aforementioned fatal case in Iran , the risk of Sarocladium infections has to be considered with regard to the substantial diabetic population in New Caledonia (ASSNC, 2022), as well as the high prevalence of COVID-19 in the archipelago (WHO Coronavirus (COVID-19) Dashboard, https://covid19.who.int/ ). The 'global' analysis of this study shows an over-representation of the genus in the comparison of short versus long fallows, which also confirms the results of the 'targeted' analysis. The over-representation of this harmful fungus in a traditional agricultural system could result in a higher probability of infection and, therefore, supports the need to raise awareness about this pathogen among healthcare workers and the local populations. From the 'targeted' analysis , Exophiala aquamarina , an opportunistic fungal pathogen causing cutaneous and disseminated infections in cold-blooded vertebrates (so far restricted to fishes) , was also found to be significantly over-represented in the short fallow. Based on the 'global' approach , another Exophiala species in the same phylogenetic clade, E. equina , was significantly present in soil samples from both short-term and long-term fallows, with greater representation in the latter. This suggests that agricultural establishment, regardless of the fallowing period, increased this pathogen in Maré's soils. Similar to S. kiliense , this underscores the need to monitor potential human infections by E. equina , which, although rare, can cause cutaneous and subcutaneous infections . Supporting the necessity of paying attention to this genus, two cases of Exophiala infections have to date been reported at the public hospital in New Caledonia (data not available on the geographical origins of the patients) (Arnaud Cannet, pers. com.). Regarding bacteria, the Anaconda results were less clear than for fungi. Despite no findings from the 'global' analysis, likely due to high intra-sample variability, the 'targeted' analysis identified ASVs in the short fallow attributed to taxonomic groups containing or suspected of containing pathogens, such as the Gemmataceae (Planctomycetes) and Burkholderiales . Indeed, molecular-based detection has revealed the presence of Planctomycetes in the blood of two leukemic, aplastic patients with neutropenia, rash, diarrhoea and micronodular pneumonia . The phylogenetic analysis revealed, for one of the clinical cases, a close relationship to Gemmata obscuriglobus , a species that belongs to the Gemmataceae. For the second case, according to , when sequences of the 16S rRNA gene were compared, the second hit with a described taxon was with another Gemmata species, G. massiliana .
This bacterium was originally recovered and characterised from a hospital water distribution system in France , thus in proximity to patients, as pointed out by . Gemmata-related sequences have also been found in human stool specimens, including from individuals with infective endocarditis . From these observations and other cellular and molecular findings, Gemmataceae representatives, which clinical microbiologists have so far overlooked , have been suggested to potentially behave as opportunistic pathogens . Concerning the Burkholderiales (given as a second example), this order encompasses a large variety of organisms, in particular plant and animal pathogens, including human pathogens . Certain Burkholderiales bacteria are considered particularly dangerous for individuals suffering from chronic lung diseases .
Uncovering beneficial soil microbes and ecological links with Anaconda's statistical approaches
As just seen above, the approach implemented in the Anaconda package revealed an increase in soil microbial pathogens with a reduction of the fallowing period. In addition to this compelling observation, other types of soil microorganisms that displayed differences in their occurrence, and that deserve attention, were also recovered from the Anaconda analyses. Indeed, several fungal saprophytes, i.e., Glutinoglossum sp., Hymenochaete acerosa , Lycogalopsis solmsii , Trechispora invisitata and Saksenaea trapezispora , were detected at lower prevalence in the short fallow's soils (Figs and ). It has been shown that saprophytic fungi can be involved in the regulation of pathogens . Competition for resources and antagonistic interactions, for example via saprophytic fungi promoting soil antifungal bacteria , are underlying mechanisms leading to soil pathogen suppression. The lowest value of mineralised carbon in the short fallow, which reflects lower microbial activity, argues in favour of a reduction in saprophytic activity.
The intrinsic balance of soil between its relative abundance of beneficial and detrimental microbes is a crucial factor in determining its capacity to express or suppress diseases. One of the major questions consequently arising is when this threshold leading to one situation or the other would be met . Other biotic components of the soil environment than fungi and bacteria can contribute to soil suppressiveness . As earlier seen, the fungal pathogenic Botryosphaeria genus (Botryosphaeriaceae) was present in both fallows, but particularly in the short one. A Botryosphaeria species has been recovered from Acacia plant species (Fabaceae) in Australia . According to Anaconda results on plant communities, Acacia spirorbis was significantly present in both fallows, with higher relative abundances in the short fallow. The larger abundance of this fungus could thus be related to Acacia ’s abundance. Interestingly, another microorganism type, a nematode of the Aphelenchoididae family has been experimentally demonstrated to feed on a Botryosphaeriaceae member . Using again the Anaconda package, significant variations in the abundance of this nematode family were detected, with higher abundances in the short fallow. Preferential grazing of an Aphelenchoididae species on ectomycorrhizal fungi has also been revealed in the literature . A . spirorbis is recognised as an ectomycorrhizal shrub , which by the way would explain the over-representation of ectomycorrhizal fungi in the short-term fallowing ( i . e ., Thelephora ceae and Cortinarius species) . Multiple biotic interactions may thus intervene in the regulation of Botryosphaeria in soil. This fungus could, as previously indicated, benefit from the larger abundance of Acacia , but, at the same time, may be regulated by the predation of nematodes, which are also stimulated by the presence of ectomycorrhizal fungi. A . spirorbis is also able to form another type of mycorrhiza, i . e ., endomycorrhiza. This characteristic could explain the over-representation in the short fallow, underlined by the ‘global’ analysis , of Spizellomyces punctatus , a chytrid species that has been suggested to attack and colonises dead endomycorrhizal spores . S . punctatus could also be an indicator of perturbation. Lozupone and Klein (2002) showed that Spizellomyces populations increased in response to disturbance ( i . e ., after experiencing agricultural cultivation); an observation supporting the aforementioned facts that short fallow constitutes a degraded system. The statistical approaches implemented in our Anaconda package may, thus, help to disentangle and better understand the multiple biological interactions occurring in a given ecosystem, particularly those leading to an over-representation of certain harmful microbes in soil . It can, additionally, participate in defining ‘targeted’ agricultural management practices to control pathogen populations, for instance, here, by regulating A . spirorbis occurrence. Besides biotic factors, abiotic soil properties can also, directly and indirectly (via influencing other soil organisms), be involved in regulating plant and human pathogens populations in soil . Soil attributes, such as pH, soil moisture, organic matter content, and nutrient availability, can have a role in soil pathogen’s establishment, survival and growth . However, in our study, when significant differences occurred ( e . 
pH, organic matter content, carbon content, and C/N ratio), they were mostly between the short fallow and the ecosystem of reference (not with the long fallow). It seems likely that biotic rather than abiotic factors regulate plant and human pathogens in our traditional agricultural system.
Anaconda package The two newly developed statistical analyses implemented in the Anaconda package, namely the ‘targeted’ and ‘global’ analyses, highlighted the over-representation of microbial ASVs, particularly for fungi, ascribed to plant and animal pathogens, including humans, in the short fallow. Indeed, fungal pathogens such as Acrocalymma fici , known as a pathogen of cultivable trees , Chaetomella raphigera , recognised as a fruit rot pathogen , and Gibellulopsis chrysanthemi , identified as a root rot pathogen were detected in significantly higher proportions through the ‘target’ approach in the short-term fallow . Additionally, an undetermined species belonging to the Botryosphaeria genus, a taxon known to be associated with grapevine decline , was absent in the forest and present in both fallows, with higher abundance in the short-duration fallow. In congruence with all these results, an increase of plant fungal pathogens in the short fallow was observed using the ‘global’ statistical investigation. For instance, taxa such as Fusarium oxysporum , Alternaria and Curvularia , known to be pathogenic to many plant species , were particularly present in the short fallow compared to both the long fallow and the forest ecosystem. In addition to these plant-detrimental microbes, a fungal taxon of primary interest for Human health has also been detected in the short fallow soils, namely Sarocladium kiliense (formerly Acremonium kiliense ). S . kiliense is a soil saprophytic fungus that can cause opportunistic infections in immunocompetent and immunocompromised individuals, with diverse manifestations, such as dermatophytosis, onychomycosis, mycetoma, pneumonia and fungemia . Outbreaks of S . kiliense in immunodepressed patients have been reported in the literature . These clusters were likely linked to infections in clinical settings , but a probable environmental source has also been suggested by . Recently, a fatal disseminated infection in a diabetic patient with coronavirus disease 2019 (COVID-19) has been reported by in Iran. The severity of the diseases that can result from S . kiliense underlines the necessity of a high level of clinical attention in this area. In New Caledonia, at the public hospital, hitherto two cases involving undefined Sarocladium species have been reported (data on geographical origin and patient health non-available) (Arnaud Cannet, pers. com.). In light of the aforementioned fatal case in Iran , Sarocladium risk infections have to be in regards to the substantial diabetic population in New Caledonia (ASSNC, 2022), as well as the high prevalence of COVID-19 in the archipelago (WHO Coronavirus (COVID-19) Dashboard, https://covid19.who.int/ ). The ‘global’ analysis of this study shows an over-representation of the genus in the comparison of short versus long fallows, which also confirms the results of the ‘targeted’ analysis. The over-representation of this harmful fungus in a traditional agricultural system could result in a higher probability of infection and, therefore, support the need to raise awareness about this pathogen among healthcare workers and the local populations. From the ‘targeted’ analysis , Exophiala aquamarina , an opportunistic fungal pathogen causing cutaneous and disseminated infections in cold-blooded vertebrates (so far restricted to fishes) , was also found to be significantly over-represented in the short fallow. Based on the ‘global’ approach , another Exophiala species in the same phylogenetic clade, E . 
equina, was significantly present in soil samples from both short-term and long-term fallows, with greater representation in the latter. This suggests that agricultural establishment, regardless of the fallowing period, increased this pathogen in Maré’s soils. Similar to S. kiliense, this underscores the need to monitor potential human infections by E. equina, which, although rare, can cause cutaneous and subcutaneous infections . Supporting the necessity of paying attention to this genus, two cases of Exophiala infections have been reported to date at the public hospital in New Caledonia (data not available on the geographical origins of the patients) (Arnaud Cannet, pers. com.). Regarding bacteria, the Anaconda results were less clear than for fungi. Despite no findings from the ‘global’ analysis, likely due to high intra-sample variability, the ‘targeted’ analysis identified ASVs in the short fallow attributed to taxonomic groups containing or suspected of containing pathogens, such as the Gemmataceae (Planctomycetes) and Burkholderiales . Indeed, molecular-based detection has revealed the presence of Planctomycetes in the blood of two aplastic, leukaemic patients with neutropenia, rash, diarrhoea and micronodular pneumonia . For one of the clinical cases, the phylogenetic analysis revealed a close relationship to Gemmata obscuriglobus, a species that belongs to the Gemmataceae. For the second case, according to , when sequences of the 16S rRNA gene were compared, the second hit with a described taxon was with another Gemmata species, G. massiliana. This bacterium was originally recovered and characterised from a hospital water distribution system in France , thus in proximity to patients, as pointed out by . Gemmata-related sequences have also been found in human stool specimens, including from individuals with infective endocarditis . From these observations and other cellular and molecular findings, Gemmataceae representatives, which clinical microbiologists have overlooked , have been suggested to potentially behave as opportunistic pathogens . Concerning the Burkholderiales (given here as a second example), this order encompasses a large variety of organisms, in particular plant and animal pathogens, including pathogens of humans . Certain Burkholderiales bacteria are considered particularly dangerous for individuals suffering from chronic lung diseases . Anaconda’s statistical approaches As just seen above, the approach implemented in the Anaconda package revealed an increase of soil microbial pathogens with a reduction in the fallowing period. In complement to this compelling observation, the Anaconda analyses also recovered other types of soil microorganisms whose occurrence differed among systems and which deserve close attention. Indeed, several fungal saprophytes, i.e., Glutinoglossum sp., Hymenochaete acerosa, Lycogalopsis solmsii, Trechispora invisitata and Saksenaea trapezispora, were detected at lower prevalence in the short fallow’s soils (Figs and ). It has been shown that saprophytic fungi can be involved in the regulation of pathogens . Competition for resources and antagonistic interactions, via saprophytic fungi promoting soil antifungal bacteria , are underlying mechanisms leading to soil pathogen suppression. The lowest value of mineralised carbon in the short fallow, which reflects a lower microbial activity, argues in favour of a reduction of saprophyte activity.
We could, thereby, hypothesise that the specific decrease of these saprophytic fungi has favoured the increase of the detrimental microorganisms observed in the short fallow plots. Alongside saprophytes, the fungal animal pathogen Metarhizium robertsii was well represented in the forest, less so in the long fallow, and least in the short fallow . This fungus is an entomopathogen infecting a wide range of arthropods and can consequently be involved in the regulation of insect pests . It can also establish itself as a root endophyte and favour plant growth and defence against plant pathogens . The specifically lower abundance of this entomopathogenic and plant-endophytic fungus in the short fallow may similarly favour an increase of detrimental organisms. Thus, in the context of soil suppressiveness (i.e., the capacity of any given soil to reduce pathogens and disease incidence), specific suppression mechanisms, through individual species or selected groups of antagonistic microorganisms , seem to regulate soil-borne pathogens in our system, rather than microbial diversity . Conversely to the reduction of saprophytic and entomopathogenic/plant-endophytic fungi, an over-representation of the chemoorganotrophic bacterial genus Gaiella was observed in the short-term fallow via the ‘targeted’ analysis . In tomato cropping soils, after organic amendment, a strong relationship was observed between this genus and the inhibition of the soil pathogen responsible for Fusarium wilt . Therefore, in our short fallow system, certain beneficial taxa acting against detrimental soil microorganisms may also be present. The intrinsic balance between the relative abundances of beneficial and detrimental microbes in a soil is a crucial factor in determining its capacity to express or suppress diseases. One of the major questions that consequently arises is when the threshold leading to one situation or the other is met . Biotic components of the soil environment other than fungi and bacteria can also contribute to soil suppressiveness . As seen earlier, the pathogenic fungal genus Botryosphaeria (Botryosphaeriaceae) was present in both fallows, but particularly in the short one. A Botryosphaeria species has been recovered from Acacia plant species (Fabaceae) in Australia . According to the Anaconda results on plant communities, Acacia spirorbis was significantly present in both fallows, with higher relative abundances in the short fallow. The larger abundance of this fungus could thus be related to the abundance of Acacia. Interestingly, another type of soil organism, a nematode of the Aphelenchoididae family, has been experimentally demonstrated to feed on a Botryosphaeriaceae member . Using the Anaconda package again, significant variations in the abundance of this nematode family were detected, with higher abundances in the short fallow. Preferential grazing of an Aphelenchoididae species on ectomycorrhizal fungi has also been revealed in the literature . A. spirorbis is recognised as an ectomycorrhizal shrub , which would explain the over-representation of ectomycorrhizal fungi in the short-term fallow (i.e., Thelephoraceae and Cortinarius species) . Multiple biotic interactions may thus intervene in the regulation of Botryosphaeria in soil. This fungus could, as previously indicated, benefit from the larger abundance of Acacia, but, at the same time, may be regulated by the predation of nematodes, which are also stimulated by the presence of ectomycorrhizal fungi. A. spirorbis
is also able to form another type of mycorrhiza, i.e., endomycorrhiza. This characteristic could explain the over-representation in the short fallow, underlined by the ‘global’ analysis , of Spizellomyces punctatus, a chytrid species that has been suggested to attack and colonise dead endomycorrhizal spores . S. punctatus could also be an indicator of perturbation. Lozupone and Klein (2002) showed that Spizellomyces populations increased in response to disturbance (i.e., after agricultural cultivation), an observation supporting the aforementioned suggestion that the short fallow constitutes a degraded system. The statistical approaches implemented in our Anaconda package may thus help to disentangle and better understand the multiple biological interactions occurring in a given ecosystem, particularly those leading to an over-representation of certain harmful microbes in soil . They can, additionally, contribute to defining ‘targeted’ agricultural management practices to control pathogen populations, for instance, here, by regulating the occurrence of A. spirorbis. Besides biotic factors, abiotic soil properties can also, directly and indirectly (by influencing other soil organisms), be involved in regulating plant and human pathogen populations in soil . Soil attributes such as pH, soil moisture, organic matter content, and nutrient availability can play a role in the establishment, survival and growth of soil pathogens . However, in our study, when significant differences occurred (e.g., pH, organic matter content, carbon content, and C/N ratio), they were mostly between the short fallow and the ecosystem of reference (not with the long fallow). It seems likely that biotic rather than abiotic factors regulate plant and human pathogens in our traditional agricultural system. Despite some tendencies, notably in terms of global microbial phyla dominance, ‘classical’ community analysis failed to detect significant changes in microbial diversity, composition, and structure in response to agricultural practices on Maré island. By contrast, our newly developed statistical approaches for community investigation implemented in the Anaconda package (i.e., the ‘targeted’ and ‘global’ analyses) clearly revealed differences in the occurrence of soil organisms among the studied systems, especially for fungi. Indeed, a significant over-representation of harmful plant and human fungal pathogens was observed in the short fallow soil. At the same time, an under-representation of beneficial soil microorganisms, such as saprophytic, entomopathogenic and plant-endophytic fungi, was detected. The specific shifts in fungal and bacterial taxa, in combination with the characterisation of other biotic and abiotic features, allowed us to infer hypothetical links between these diverse soil environmental components and assume their potential implication in soil pathogen suppression . Our findings strongly support the interest of using next-generation sequencing technologies, in combination with more classical ecological inventories and appropriate statistical methods, to establish sensitive, informative and reproducible biological indicators and subsequently assess disease potential in soils. They also highlight the value of drawing on the omics toolbox by transferring methodologies initially developed for genomics and transcriptomics to metabarcoding.
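As a rough illustration of what testing a single ASV for over-representation between two conditions can look like, the sketch below compares the relative abundances of one hypothetical ASV across short- and long-fallow samples using a log2 fold change and a rank-based test. This is a simplified stand-in, not the negative-binomial model implemented in the Anaconda package, and all counts are invented.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Invented per-sample read counts for one ASV and total reads per sample.
asv_sf   = np.array([310, 280, 95, 410, 220])      # short-fallow plots
asv_lf   = np.array([40, 10, 65, 25, 30])          # long-fallow plots
total_sf = np.array([52000, 48000, 50500, 61000, 47000])
total_lf = np.array([49000, 53000, 50000, 46500, 52500])

# Relative abundances, with a small pseudocount so zeros do not break the log.
rel_sf = (asv_sf + 0.5) / total_sf
rel_lf = (asv_lf + 0.5) / total_lf

log2_fold_change = np.log2(rel_sf.mean() / rel_lf.mean())
stat, p_value = mannwhitneyu(rel_sf, rel_lf, alternative="two-sided")

print(f"log2 fold change (SF vs LF): {log2_fold_change:.2f}")
print(f"Mann-Whitney U p-value: {p_value:.4f}")
```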
In addition to the insights gained from the classical community analysis and the Anaconda package, it is important to note that the cultivation of yams holds great cultural and symbolic significance for the local people of Maré Island in New Caledonia. Thus, the impact of changes in agricultural practices on soil health extends beyond the purely ecological and must also be considered within a cultural context. This new tool that is Anaconda could further be used for determining the impact in various crop systems of different agricultural practices (e.g., organic amendments and cover crops) on soil microorganisms, and consequently help to find solutions for regulating detrimental microorganisms. Such a combination of ‘targeted’ and ‘global’ analyses could promote the use of eDNA metabarcoding in biomonitoring and represent the next breakthrough in soil health and One Health assessment, as well as in various ecological domains. S1 Fig Sampling plan. Five plots of 20 x 20m were established per condition, providing 15 plots in total. In each of the 20 x 20m, four 5 x 5m sub-plots were positioned in the four corners and one in the centre. Within each of these sub-plots, five soil samples were collected at 0–15 cm depth using a five cm diameter auger. All soil samples collected in a given plot were then mixed to form a composite soil sample. Thus, each composite sample corresponds to one plot. A total of 15 composite samples was finally obtained and corresponded to the 15 plots set up in the present work. SF is for Short Fallow; LF is for Long Fallow, and F is for Forest. (PDF) S2 Fig Alpha rarefaction plots (observed ASVs, Shannon, and Faith PD) for fungi (ITS2). The alpha rarefaction plots for fungi typically show three curves: observed ASVs, Shannon index, and Faith PD. SF is for Short Fallow; LF is for Long Fallow, and F is for Forest. (PDF) S3 Fig Alpha rarefaction plots (observed ASVs, Shannon, and Faith PD) for bacteria (16S). Same legend as the . (PDF) S4 Fig Fungi diversity boxplots. The fungi diversity boxplots represent various metrics used to assess the diversity of fungal communities. These metrics include observed ASVs (Amplicon Sequence Variants), Chao1, Simpson, Shannon entropy, Faith PD, Simpson evenness, Pielou evenness, and Fisher alpha. SF is for Short Fallow; LF is for Long Fallow, and F is for Forest. (PDF) S5 Fig Bacteria diversity boxplots. Same legend as the . (PDF) S6 Fig Soil texture triangle. At the corners of the triangle are three main soil components: sand, silt, and clay. Each dot is a sample that falls within one of the twelve sections. (PDF) S7 Fig Anaconda R package schema to understand the links between different files and analysis portions. For a better understanding, please refer to the readme document at ‘ https://github.com/PLStenger/Anaconda ’. (PDF) S8 Fig Physico-chemical analysis. Granulometric fraction, physical, linked organic matter, free organic matter, microbial biomass analysis boxplots, microbial biomass, mineralised carbon balance (microbial activity), and mineralised nitrogen balance (microbial activity) analysis boxplot. SF is for Short Fallow; LF is for Long Fallow, and F is for Forest. (PDF) S9 Fig db-RDA (distance-based redundancy analysis) plot of the fungal phyla in relation to the granulometric fractions (clay, silt, and sand; in per cent) of soil samples collected from three different land-use types: Short fallow (SF), long fallow (LF), and forest (F). 
The plot displays the distribution of the fungal phyla in relation to the granulometric fractions of the soil samples, with each point representing a sample. (PDF) S10 Fig Dispersion (A and C) and sparsity (B and D) plot for fungi (A and B) and bacteria (C and D). Dispersion and sparsity plots are used to assess the data quality and the statistical model’s appropriateness. A dispersion plot shows the relationship between the mean of normalised counts and their variance (or dispersion) for each ASV. The dispersion estimates are calculated using a negative binomial model, and the plot is typically shown on a logarithmic scale to visualise the trend. A good dispersion plot shows a relatively constant dispersion across all normalised count levels, which indicates that the negative binomial model is appropriate for the data. A sparsity plot shows the proportion of ASVs with a given number of counts in the sample. It is used to assess the overall level of sequencing depth and the quality of the normalisation procedure. The plot typically shows a decreasing trend, with the majority of ASVs having low counts and a smaller proportion having higher counts. If the sparsity plot shows a high proportion of ASVs with low counts, it suggests that the sequencing depth is insufficient, or the normalisation procedure is inadequate. In contrast, if the sparsity plot shows a high proportion of ASVs with very high counts, it may indicate a technical artefact or batch effect that needs to be addressed. (PDF) S11 Fig Pheatmap log2 norm counts with taxonomy for fungi from the Anaconda R package. The heatmap displays the relative abundance of the 75 most abundant fungal Amplicon Sequence Variants (ASVs) across multiple samples. The log2 normalised counts of each ASV were used to generate the heatmap, which allows for the comparison of relative abundance between different ASVs and samples. The heatmap also includes taxonomic information for each ASV, which allows for the identification of taxonomic groups that are more abundant in certain samples or conditions. The heatmap is clustered based on the Euclidean distance between samples and ASVs using the average clustering method, which groups samples and ASVs with similar abundance patterns together. This allows for the identification of clusters of samples or ASVs that share similar characteristics or respond similarly to certain conditions. SF is for Short Fallow; LF is for Long Fallow, and F is for Forest. (PDF) S12 Fig Pheatmap log2 norm counts with taxonomy for bacteria from the Anaconda R package. Same legend as the . (PDF) S1 Table MultiQC results for fungi. Total number of sequences and their means (and standard deviation) by condition (SF is for Short Fallow; LF is for Long Fallow, and F is for Forest), before and after the Trimmomatic step, percentage of kept sequences. (XLSX) S2 Table QIIME2 stats for fungi. Total number of sequences and their means (and standard deviation) by condition (SF is for Short Fallow; LF is for Long Fallow, and F is for Forest) for each QIIME2 step (input, filtered, percentage of input passed filter, denoised, merged, percentage of input merged, mean, SD, non-chimeric, percentage of input non-chimeric, mean, SD, Table, ConTable, and Rarefaction). (XLSX) S3 Table MultiQC results for bacteria. Same legend as the . (XLSX) S4 Table QIIME2 stats for bacteria. Same legend as the . (XLSX) S5 Table Organophysico-chemicals analysis. SF is for Short Fallow; LF is for Long Fallow, and F is for Forest. 
(XLSX) S6 Table Plantae statistics results for the 29 found plant species. SF is for Short Fallow; LF is for Long Fallow, and F is for Forest. (XLSX) S7 Table Nematoda statistics results for the 36 found nematoda families. SF is for Short Fallow; LF is for Long Fallow, and F is for Forest. (XLSX) |
Research note: A scald water surfactant combined with an organic acid carcass dip reduces microbial contaminants on broiler carcasses during processing | bf893871-dccb-4b34-b7b8-705eb6e78dcc | 11141257 | Microbiology[mh] | Microbial contamination of poultry products during processing results in reduced shelf life from the introduction of spoilage microorganisms such as Pseudomonas and Lactobacillus spp. and can present a significant food safety threat to consumers due to Salmonella and Campylobacter spp. contamination . Thus, food safety programs have relied on multiple microbial reduction interventions throughout the sequential steps of poultry processing to reduce or eliminate microbial contaminants through a multi-hurdle approach . Organic acid ( OA ) carcass dips or sprays have been a common intervention to reduce microbial contamination on carcasses postslaughter due to their bactericidal effects . Another intervention has been amending scald water with surfactant agents that aid in feather removal through skin follicle protein denaturation and improve the efficacy of additional interventions such as OA dips and sprays . A combination of multiple interventions with synergistic effects is ideal for a multi-hurdle food safety approach, but less is known about the interactions of commonly available interventions relative to their independent effects on reducing microbial contamination of broiler carcasses. Microbial contaminants can remain firmly adhered to poultry carcasses despite chemical and physical applications during processing . Topographical changes to chicken skin are induced by scalding and picking and include exposed feather follicles and microabrasions, which facilitate deep attachment of coliform bacteria and physical protection from OA treatments . Previous studies have reported synergism among surfactants such as sodium dodecyl sulfate and OA treatments in the reduction of microbial contaminants on chicken skin during processing . There is an ongoing need to evaluate combinations of surfactant scald additives and organic acid carcass dips in broiler processing as producers consider benefits of multiple approaches to maintain process control. We hypothesized that a surfactant scald additive ( SA ) labeled for feather removal combined with an organic acid blend carcass dip would reduce microbial surface contaminants after feather picking. This study was conducted using whole broiler carcasses in a small-scale processing facility with commercially available SA and OA solutions.
Broiler Husbandry: All procedures were conducted in accordance with the principles and specific guidelines of the Guide for the Care and Use of Agricultural Animals in Research and Teaching and approved by the North Carolina State University Institutional Animal Care and Use Committee (Protocol # 16-122-T). Ross 708 × YPM broilers were reared in a curtain-sided, fan-ventilated, litter floor pen house. Mixed-sex broilers were placed in uniform pens (1.2 m × 4.0 m; 4.8 m²) in groups of 20 birds per pen for a stocking density of 0.24 m² per bird. Each pen was equipped with Plasson water drinkers and tube feeders that supplied commercial diets in starter, grower, and finisher phases. Feed was withdrawn 12 h prior to loading and transporting broilers to the processing facility. Broilers were reared for poultry processing teaching and research studies, received no feed additives, and were not intentionally exposed to or challenged with coliform bacteria at any point. One week prior to processing, the litter of all pens was tested for the presence of Salmonella spp. with sterile, pre-enriched socks and detection by an enzyme-linked fluorescence assay using previously published methods . No Salmonella spp. were detected, and further testing of this pathogen was not pursued. Broiler Processing and Sampling. Broiler processing steps and sampling procedures are presented in . Seventy-five broilers were randomly selected, loaded into transport crates, and moved to an on-site pilot processing facility. Groups of 15 birds were unloaded, individually weighed, hung on a shackle line, electrically stunned for 11 s, and exsanguinated for approximately 1 min 45 s after the carotid artery and jugular vein were severed manually by a trained individual using a sharp knife cut. An undisturbed feathered neck skin sample was collected from the first group of broilers after exsanguination and processed as described below to determine the initial surface microbial contamination. These initial 15 birds were not subjected to additional processing. All remaining broilers subjected to experimental treatments were processed in 4 groups of 15 birds to complete a 2 × 2 factorial arrangement . Bled birds were scalded in an agitated rotary scald bath heated to 57°C for 80 s, 3 birds at a time, for a total of 5 scalding rounds per 15-bird group. For the SA treatment, the scald water was either untreated (control) or amended with a commercially available scalding agent (Turkleen, Birko Corporation, Henderson, CO). The agent was a slightly alkaline (pH = 7.3), biodegradable combination of surfactants and emollients labeled for efficient feather removal. According to label instructions, 33.1 mL of the additive was added to 132.4 L of water in the scald tank to achieve a final concentration of 2.5 ppm. The scald water tank was drained, disinfected with a commercial foaming solution, rinsed, and refilled between scalding carcasses subjected to the control and SA treatments. After scalding, feathers were removed with an automated picker for 30 s, which was also disinfected with a commercial-grade foaming disinfectant between treatment groups, and the carcasses were gently rinsed with municipal water to remove residual feathers prior to the OA treatment. Carcasses were either immersed in a room temperature (20°C) municipal water bath (control) or a 2% solution of the same water and a commercially available lactic and citric acid blend (Chicxcide, Birko Corporation, Henderson, CO) for 20 s prior to hanging for 1 min.
The control water and treatment OA dips were replenished for the 4 treatment groups followed by pH measurements. Neck skin samples were then aseptically collected with sterile surgical instruments. Skin samples were immediately placed in sterile filter bags (VWR International, Radnor, PA) that contained approximately 40 mL of chilled 1% buffered peptone water (Thermo Scientific Oxoid product #CM0509B) with a neutralizing agent (sodium thiosulfate, Thermo Scientific Chemicals product #AC202875000) added at 1 g/L. Consecutive pH measurements were taken of the neutralizing buffered peptone water at 0, 1, and 5 min after adding the acid-dipped skin to verify the neutralization of the chosen organic acid blend for a subset of skin samples. The mass of these neutralizing media sample bags was determined prior to and after skin sample addition to determine final skin sample weights for calculating CFU/g of skin. Samples were placed on ice and transported to a Biosafety Level 2 laboratory for processing. Samples were homogenized in a mechanical stomacher for 30 s and duplicate aliquots were serially diluted in sterile phosphate buffered saline prior to plating 1 mL of each dilution on 3M Petrifilm Aerobic Count Plates (Neogen product# 700002116) and 3M Petrifilm E. coli /Coliform Count Plates (Neogen product #700002271) and aerobic incubation according to the manufacturer's instructions. E. coli colonies were color-differentiated from other coliforms and reported separately from the total coliform count according to the manufacturer's interpretation guide. Plate dilutions with countable colonies were manually counted by a blinded individual and the average number obtained from duplicate dilutions of each sample were included in the final analysis. Statistical Analysis. Plate count data from treatment groups were normalized to CFU/gram of the neck skin sample, log transformed, and subjected to a 2-way ANOVA using the GLM procedure of JMP (SAS Institute Inc 2010, Cary, NC). The models included the scald water treatments (no additive and surfactant) and organic acid treatments (water and 2%) as the main factors in addition to the 2-way interactions. Least-squared means were compared with the Student's t test for main effects and Tukey's Honest Significant Difference for interaction effects. Differences were considered statistically significant when P ≤ 0.05.
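As an illustration of the normalization and model just described, the sketch below converts hypothetical plate counts into log10 CFU per gram of skin and fits a two-way ANOVA with the scald and acid treatments as factors. The colony counts, dilution factors, and sample masses are invented, and statsmodels is used here simply as one convenient way to fit such a model; it is not the JMP analysis reported in this study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented example data: colonies counted on the plated dilution, dilution
# factor, buffer volume (mL), skin sample mass (g), and the two treatments.
df = pd.DataFrame({
    "colonies":  [182, 210, 95, 120, 60, 75, 22, 30],
    "dilution":  [1e3] * 8,
    "buffer_ml": [40] * 8,
    "skin_g":    [1.9, 2.1, 2.0, 1.8, 2.2, 2.0, 1.9, 2.1],
    "scald": ["control", "control", "surfactant", "surfactant"] * 2,
    "acid":  ["water"] * 4 + ["organic_acid"] * 4,
})

# CFU per gram of skin = colonies x dilution x (buffer volume / skin mass),
# assuming 1 mL of each dilution was plated; then log10-transform.
df["cfu_per_g"] = df["colonies"] * df["dilution"] * df["buffer_ml"] / df["skin_g"]
df["log_cfu"] = np.log10(df["cfu_per_g"])

# Two-way ANOVA with main effects and their interaction.
model = smf.ols("log_cfu ~ C(scald) * C(acid)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```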
Microbial contaminants on broiler carcass skin were investigated after implementation of a surfactant scald water additive and an OA blend carcass dip, alone and in combination. The mean live weight of processed broilers was 3.16 ± 0.246 kg (range: 2.80 – 3.66 kg). Microbial contamination of unprocessed skin was first established by sampling a feathered neck skin sample. Unprocessed neck skin contained up to 4.74 log CFU per gram of skin, while aerobic counts were nearly double at 8.25 log CFU per gram of skin . The scald water additive did not significantly reduce carcass neck skin microbial contamination compared to scald water with no additive. This was expected, as the surfactant agent used was labeled for feather removal and was only slightly alkaline at a pH of 7.3. Alkalizing scald additives containing sodium hydroxide (pH = 11) were more efficacious in reducing carcass microbial contaminants when employed as sole interventions . In the present study, bactericidal action was attributed to the post-defeathering OA carcass dip (pH = 2.4 ± 0.1), which resulted in a significant reduction of all microbial contaminants tested ( P < 0.01). A 0.61, 0.76, and 1.59 log reduction in general coliform, E. coli , and aerobic counts, respectively, was observed regardless of the scald water treatment . The combined SA and OA treatment resulted in a 1.8 log reduction in aerobic counts compared to the control treatment ( P = 0.053) . These results indicated that a combination of SA and OA was efficacious in reducing contamination of broiler neck skin by coliform and aerobic bacteria after scalding and feather picking. Chicken skin attachment studies have investigated interactions between transdermal surfactants and organic acids in vitro and reported mechanisms of synergism. Surfactant agents such as sodium lauryl sulfate and sodium dodecyl sulfate ( SDS ) were proposed to increase permeation of organic acids into skin and feather follicles where bacteria were attached, but there is more robust evidence for a solvent action that interferes with bacterial attachment to skin and feather follicles and thereby increases OA bactericidal action . However, sodium dodecyl sulfate did not increase the efficacy of 0.2% peracetic acid when the two were combined and applied to chicken skin post-defeathering . The results of the present study indicate that a surfactant de-feathering aid added during the scalding process had potential synergism with the OA carcass dip blend of lactic and citric acids applied post-defeathering. Federally regulated chicken slaughter establishments in the United States have utilized microbial reduction interventions during processing to improve product safety and were able to increase efficiency through line speed waivers with modified testing programs that demonstrate process control. Approved antimicrobial interventions for use in poultry processing include many organic acid applications and are described in the United States Department of Agriculture Food Safety and Inspection Service (USDA-FSIS) Directive 7120.1, Revision 58 . The USDA-FSIS also approved maximum line speed waiver requests for young chicken slaughter establishments that operated under the New Poultry Inspection System. The line speed waiver request was amended to require daily aerobic plate count data collection in addition to other criteria that demonstrated adequate process control .
The present study exemplified the practicality of reducing carcass contamination with a combined scald water surfactant and OA treatment in a small-scale operation, as demonstrated by aerobic plate count data. From a commercial production perspective, food safety interventions can be difficult to implement due to product volume and cost-benefit ratios. However, smaller-volume processors may have more flexibility in tailoring a food safety program to their unique production and consumer needs. Limitations of this study include a small sample size and sample collection after picking, before evisceration, chilling, and further processing steps were completed. Carcass quality parameters, including color, and enumeration of whole-carcass rinse samples post-chilling should be evaluated when assessing similar interventions in food safety programs. The focus of this study was the combined efficacy of a scald water surfactant and OA dip immediately after scalding and picking. We used an applied approach to determine interactions between these two common and widely available processing interventions using equipment and procedures more aligned with those of small-scale poultry processors. These findings are as informative to larger commercial operations as they are to lower-volume processors who seek multiple, complementary methods to reduce contamination on finished products and improve operational efficiency through line speed waivers . Overall, a combination of surfactant-type scald water additives and organic acids may contribute to poultry food products with less risk of microbial contamination. Interactions of these interventions that result in enhanced bactericidal activity on poultry carcasses warrant further investigation.
The authors declare no conflicts of interest.
|
Pressure-clamped single-fiber recording technique: A new recording method for studying sensory receptors | 2b161eae-7ca1-4d36-92fb-9d6f31a4e5a4 | 7235654 | Physiology[mh] | Recordings of afferent nerve impulses are useful in studying sensory physiology and pain. Conventional extracellular recordings such as those using suction electrodes or with a pair of metal wires allow one to record compound action potentials propagated on an afferent nerve bundle. However, these recording techniques are not well suited for studying modality-specific sensory receptors such as mechanical receptors, thermal receptors, and nociceptors. This is because compound action potentials are impulses from many different types rather than a specific type of afferent nerve fibers. To solve this issue, researchers have developed several types of single-unit recording methods. For example, a sharp electrode can be inserted into a neuron of a sensory ganglion from which intracellular single-unit recordings are made. However, the probability is low to insert a recording electrode into the sensory neuron that innervates a receptive field of interest such as a particular area of the skin. The most widely used single-unit recording approach is the teased-fiber single-unit recording technique. , , In this recording method, a nerve trunk is carefully isolated, severed, and successively split or teased into fine filaments that contain a few nerve fibers. The teased-nerve fibers are then placed over a recording electrode made by a pair of platinum or silver wires. This recording approach has provided an important tool to study different sensory receptors and their physiological and pathophysiological functions including pain. – Although the teased-fiber single-unit recording technique has continued to be a main method to study sensory receptors and their roles in pain, , , , the technique suffers from a number of disadvantages. First, many nerve fibers in the nerve trunk were severely injured during mechanical separation procedures of preparing teased fibers. As such, only a few nerves can be recorded in each experiment, and these nerves may not fully represent those that innervate the receptive field of interest. Furthermore, in the teased-fiber single-unit recordings, the injured nerve fibers may have altered electrophysiological properties. Second, the procedures for preparing teased fibers are very delicate, tedious, and time consuming. In addition, spike discrimination and sorting need to be performed to analyze spikes in order to differentiate between single units and multiple units in a train of spikes recorded by this technique. Third, teased-fiber single-unit recording is not a true single-fiber recording method because there are still a number of fibers in each teased nerve. This compromises the precision in determining functional properties of a specific type of sensory receptors of interest. To avoid the technical weaknesses of the teased-fiber single-unit recording technique, we have developed the pressure-clamped single-fiber recording technique, a simple and reliable method to record impulses on individual nerve fibers in a nonteased nerve bundle. We have applied this new approach to successfully record impulses following the activation of mechanoreceptors in whisker hair follicles and in skin-nerve preparations of mice. The technique allowed us to record impulses conveyed by Aβ-, Aδ-, and C-afferent nerve fibers of mice.
Whisker hair follicle and skin nerve preparations Male C57BL/6 mice aged 8–10 weeks were used. Animal care and use conformed to National Institutes of Health guidelines for care and use of experimental animals. Experimental protocols were approved by the Institutional Animal Care and Use Committee at the University of Alabama at Birmingham. Whisker hair follicle preparations were made based on the method of our previous studies. – In brief, mice were anesthetized with 5% isoflurane and then sacrificed by decapitation. Whisker pads were dissected out and placed in a 35-mm Petri dish that contained 5 ml ice-cold L-15 medium. Each whisker hair follicle together with its afferent bundle and hair shaft was then gently pulled out. The capsule of each whisker hair follicle was cut open to two small holes with one hole at the enlargement part and the other hole at the end of the capsule to facilitate solution exchange. The whisker hair follicle preparations were affixed in a recording chamber by a tissue anchor and submerged in a Krebs bath solution, and the recording chamber was then mounted on the stage of an Olympus BX50WI microscope and perfused with the Krebs bath solution at the flow rate of 2 ml/min. The Krebs solution contained (in mM): 117 NaCl, 3.5 KCl, 2.5 CaCl 2 , 1.2 MgCl 2 , 1.2 NaH 2 PO 4 , 25 NaHCO 3 , and 11 glucose (pH 7.3 and osmolarity 325 mOsm) and was saturated with 95% O 2 and 5% CO 2 . Unless otherwise indicated, the Krebs bath solution was maintained at a room temperature of 24°C. Skin-nerve preparations were made based on previous studies with modifications. , In brief, mice were euthanized by overdose of isoflurane followed by decapitation. Hairy skin of the hindlimb was shaved and then dissected out together with the saphenous nerve. The skin and its attached saphenous nerve were placed in a recording chamber that contained the aforementioned Krebs bath solution, and fat and connective tissues in the skin were carefully removed. The skin was then affixed by tissue pins and the saphenous nerve affixed by a tissue anchor in the recording chamber. The recording chamber was mounted on the stage of the Olympus BX50WI upright microscope. The skin-nerve preparation was continuously perfused by the Krebs bath solution at the room temperature of 24°C The pressure-clamped single-fiber recording technique For pressure-clamped single-fiber recordings made from afferent nerves innervating whisker hair follicles, whisker hair follicle preparations on the stage of the microscope were exposed to a mixture of 0.05% dispase II plus 0.05% collagenase for 3–5 min. The enzymes were then washed off with the continuous perfusion of Krebs bath solution. Recording electrodes were made by thin-walled borosilicate glass tubing without filament (inner diameter 1.12 mm, outer diameter 1.5 mm). They were fabricated using a P-97 Flaming/Brown Micropipette Puller and fire polished to make tip diameter at 3 to 6 µm. The recording electrode was filled with the Krebs bath solution, mounted onto an electrode holder which was connected to a high-speed pressure-clamp device (ALA Scientific Instruments, Farmingdale, NY). Under a 40× objective, individual fibers in the cutting end of the whisker afferent nerve bundle were separated by a positive pressure of approximately +10 mmHg delivered from the recording electrode. The end part of a single nerve fiber was then aspirated into the recording electrode by a negative pressure at approximately −10 mmHg. 
Once the nerve end reached approximately 10 µm in length within the recording electrode, the pressure in the recording electrode was readjusted to −5 to −1 mmHg and maintained throughout the experiment. Nerve impulses were recorded using a Multiclamp 700 A amplifier and signals sampled at 20 KHz with low pass filter set at 1 KHz. Unless otherwise indicated, all experiments were performed at 24°C. For pressure-clamped single-fiber recordings made from saphenous nerves innervating the skin of the hindpaw, the aforementioned recording procedures were applied to the cutting end of the saphenous nerve. In some experiments, a small segment of the saphenous nerve was aspirated into a suction stimulation electrode. The suction stimulation electrode had a funnelform tip to help to aspirate nerve segment into the stimulation electrode. The stimulation site was approximately 15 mm away from the recording electrode. The stimulation electrode was used to deliver square pulse stimuli each at the duration of 50 µs to evoke nerve impulses. This allowed us to measure conduction velocity of each nerve fiber recorded so that a recorded nerve fiber could be categorized based on its conduction velocity. In a different set of experiments, the pressure-clamped single-fiber recordings were applied to isolated sciatic nerves without attached skin. In these experiments, animals were euthanized, the skin was cut open at the gluteal area, and the sciatic nerve trunk was dissected out. The sciatic nerve trunk was from proximal site at sacral foramen level to distal site at the ankle level, which was in the length of approximately 30 mm. The distal end of the nerve was then divided into several fascicles manually and enzymatically treated as described above, and the pressure-clamped single-fiber recording was then applied to individual fibers. Impulses were evoked by electrical stimulation at the proximal site of the sciatic nerve trunk using a suction electrode. The individual nerve fibers recorded were classified into Aβ-, Aδ-, or C-fiber based on the conduction velocity of the antidromic impulses. Stimulation of mechanoreceptors in whisker hair follicles Mechanical stimulation was applied to the body of each whisker hair follicle using a blunted 20-gauge needle as a probe. The needle was mounted on a holder and attached to a piezo device. The tip of the needle was positioned at an angle of 45° to the surface of the whisker hair follicle. The piezo device with the mechanical probe was mounted on a Sutter MPC-200 micromanipulator. The piezo device was computer-programmable with the pCLAMP10 software to deliver forward stepwise mechanical stimulation. In each experiment, a receptive field was first probed manually with the mechanical probe controlled by the micromanipulator. Once identified, the vertical position of the probe tip was adjusted such that no nerve impulses were evoked at this position but a 1-µm forward movement of the probe would evoke nerve impulses. Unless otherwise indicated, the stepwise forward movement of the probe consisted of a 100-ms ramp to 38-µm step (dynamic phase) followed by a 2500-ms holding position at the 38-µm step (static phase) and then a 100-ms ramp back to baseline. Stimulation of mechanoreceptors in the skin Receptive fields of mechanoreceptors were identified by skin indentation using either von Frey filaments or a mechanical probe fabricated by a glass electrode. The mechanical probe was fire-polished to 500 µm in diameter. 
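The ramp-and-hold displacement protocol described above can be expressed as a simple command waveform. The sketch below builds such a waveform as a NumPy array; the sampling rate and the idea of driving the piezo from this array are assumptions for illustration, not the pCLAMP protocol actually used.

```python
import numpy as np

def ramp_and_hold(amplitude_um=38.0, ramp_ms=100.0, hold_ms=2500.0, fs_hz=20000):
    """Build a ramp-and-hold displacement command (in micrometers).

    The waveform rises linearly to `amplitude_um` over `ramp_ms` (dynamic
    phase), holds for `hold_ms` (static phase), then ramps back to baseline.
    """
    n_ramp = int(ramp_ms / 1000.0 * fs_hz)
    n_hold = int(hold_ms / 1000.0 * fs_hz)
    up = np.linspace(0.0, amplitude_um, n_ramp, endpoint=False)
    hold = np.full(n_hold, amplitude_um)
    down = np.linspace(amplitude_um, 0.0, n_ramp)
    return np.concatenate([up, hold, down])

stimulus = ramp_and_hold()
print(f"{stimulus.size} samples, peak displacement {stimulus.max():.0f} um")
```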
It was attached to an electrode holder and positioned vertically to the surface of the skin. The movement of mechanical probe was controlled by a Sutter MPC-200 micromanipulator. In some experiments, the mechanical probe was attached to a piezo device whose movement was computer programmed to produce displacement steps in a ramp-and-hold manner. Unless otherwise indicated, the stepwise forward movement of the probe consisted of a 100-ms ramp to 38-µm step followed by a 2500-ms holding position at the 38-µm step (static phase) and then a 100-ms ramp back to baseline.
It was attached to an electrode holder and positioned vertically to the surface of the skin. The movement of the mechanical probe was controlled by a Sutter MPC-200 micromanipulator. In some experiments, the mechanical probe was attached to a piezo device whose movement was computer programmed to produce displacement steps in a ramp-and-hold manner. Unless otherwise indicated, the stepwise forward movement of the probe consisted of a 100-ms ramp to 38-µm step followed by a 2500-ms holding position at the 38-µm step (static phase) and then a 100-ms ramp back to baseline.
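Because fiber classification in this study rests on the measured conduction velocity, a small helper converting conduction distance and response latency into a velocity and an approximate fiber class may be useful. The sketch below is only an illustration; the distance, latencies, and class boundaries are assumed values for mouse fibers rather than numbers taken from this study.

```python
def conduction_velocity(distance_mm, latency_ms):
    """Conduction velocity in m/s from distance (mm) and latency (ms)."""
    return distance_mm / latency_ms   # mm/ms is numerically equal to m/s

def classify_fiber(cv_m_per_s, abeta_min=10.0, adelta_min=1.2):
    """Rough mouse fiber classes; the boundary values are assumptions."""
    if cv_m_per_s >= abeta_min:
        return "Abeta"
    if cv_m_per_s >= adelta_min:
        return "Adelta"
    return "C"

# Example: stimulation site assumed to be 15 mm from the recording electrode.
for latency in (0.9, 3.3, 45.0):            # hypothetical latencies (ms)
    cv = conduction_velocity(15.0, latency)
    print(f"latency {latency:5.1f} ms -> {cv:5.2f} m/s -> {classify_fiber(cv)}")
```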
shows the arrangement of the pressure-clamped single-fiber recording setup. The setup consists of an Olympus BX50WI upright microscope, a custom-made recording chamber, an electrophysiology recording system whose recording electrode holder was connected with a pressure-clamp device, and a computer-programmable piezo device . The microscope was equipped with a 4× objective for observing tissue samples and a 40× water immersion objective for observing individual nerve fibers. The recording chamber with tissue preparations was mounted on the stage of the microscope and perfused with Krebs bath solution. The pressure-clamp device was used for aspirating a single nerve fiber into recording electrode and for clamping the nerve fiber within the recording electrode during recordings of nerve impulses. In each recording, a fire-polished recording electrode was used and the tip size of the electrode was 3–6 µm in diameters . The recording electrode could be reused multiple times for the pressure-clamped single-fiber recordings. shows an example of our pressure-clamped single-fiber recordings from whisker afferent nerves to study mechanoreceptors in mouse whisker hair follicles. In each experiment, the whisker afferent nerve bundle was first transected with a sharp surgical knife and then briefly enzyme-treated to facilitate the aspiration of individual nerve fibers at the cutting end . To access a single fiber by the recording electrode, we applied a positive pressure of approximately +10 mmHg into the recording electrode to separate individual nerve fibers at the cutting end of the whisker afferent nerve bundle. Then, a negative pressure of approximately −10 mmHg was applied into the recording electrode to gently aspirate the end of a single afferent fiber into the recording electrode. The negative pressure was then adjusted to −5 mmHg to −1 mmHg in the recording electrode, which could maintain stable recordings for more than 2 h . Sample traces in show three types of mechanical responses recorded from three different individual whisker afferent nerve fibers following mechanical displacements of the capsules of whisker hair follicles . These three types of mechanical responses were rapidly adapting (RA) impulses ( , left), slowly adapting type 1 (SA1) impulses ( , middle), and slowly adapting type 2 (SA2) impulses ( , right). In a recent study, we have characterized these three types of mechanoreceptors by using this single-fiber recording technique. In the present study, we focused on the technical aspects of this new recording method. We explored feasibility of applying the pressure-clamped single-fiber recordings to study mechanoreceptors in the skin-nerve preparations of mice. shows the skin-nerve preparation made with the shaved hairy skin of a hindpaw and the saphenous nerve that innervated the skin area of the hindpaw. The skin was affixed by tissue pins to the bottom of a recording chamber with the inside of the skin facing up. The saphenous nerve bundle was affixed in the recording chamber with a tissue anchor. shows recording setup in the recording chamber, and is the schematic diagram of the recording setup. In this set of experiments, the pressure-clamped single-fiber recordings were applied to individual afferent fibers while mechanical stimuli were applied onto the skin. In addition, a suction stimulation electrode was used to deliver electrical stimulation, which allowed us to classify afferent nerve types based on their conduction velocities . 
shows an example of impulses elicited by mechanical stimuli. Impulses were evoked by multiple brief indentations of the skin at 100 µm displacement with the mechanical probe fabricated from glass tubing ( , left panel). When the indentation was applied stepwise in a ramp-and-hold manner, the mechanical response was shown to be rapidly adapting ( , right panel). The site was also probed with von Frey filaments, which showed a mechanical threshold of 0.07 g in this receptive field. In addition, the conduction velocity of the nerve fiber was measured to be 2.86 m/s based on impulses elicited by electrical stimulation. At a different site, probing with von Frey filaments elicited mechanical responses with a threshold of 0.16 g. We applied skin indentation at this site either manually via a manipulator ( , left) or using computerized piezo movement ( , right) for a prolonged period of time; both stimulation methods elicited slowly adapting responses. The conduction velocity of this fiber was 4.56 m/s based on impulses elicited by electrical stimulation. We determined whether the pressure-clamped single-fiber recording could be applied to different types of afferent fibers classified by their conduction velocities. In this set of experiments, sciatic nerve fibers with different diameters were tested . The recordings were made from the distal end of the common peroneal branch of the sciatic nerve while electrical stimulation was delivered to the proximal site of the sciatic nerve . When recordings were made from fibers of large diameters, we found that the latency of the impulses elicited by electrical stimulation was very short . The conduction velocities calculated based on the latencies and the lengths of the nerve fibers were 17.41 ± 1.1 m/s (n = 15), falling into the category of Aβ-fibers of mice . Overall, these Aβ-fibers had diameters of 5.76 ± 0.57 µm (n = 15, ). When recordings were made from fibers of medium diameter, the latencies of the impulses were longer and the conduction velocities of these fibers were 7.4 ± 0.49 m/s (n = 23, , ), falling into the category of Aδ-fibers. These Aδ-fibers had diameters of 3.96 ± 0.27 µm (n = 23, ). We also made recordings from the finest fibers in the sciatic nerve bundle, whose diameters were 1.41 ± 0.06 µm (n = 11). We found that the latencies of the impulses were very long and the conduction velocities were 0.33 ± 0.02 m/s (n = 11, , ), falling into the category of C-fibers. Interestingly, some of the C-fibers fired two to three impulses in response to a single electrical stimulation . Overall, of the 11 C-fibers recorded in response to a single electrical stimulation, five of them each fired a single impulse, and six of them each fired two to three impulses . Impulses recorded by the pressure-clamped single-fiber recording usually showed a high signal-to-noise ratio. However, the signal, i.e., the amplitude of the impulses, could gradually become smaller over the course of recording in some experiments . We found that the reduction of impulse amplitudes was usually due to recording electrode drift such that the nerve segment inside the recording electrode became too short. In this case, repositioning the recording electrode and aspirating an appropriate length of nerve segment back into the recording electrode could usually restore the amplitude of the impulses. We also found that the amplitudes of impulses could be small if too long a nerve segment was aspirated into the recording electrode.
In this case, the pressure within the electrode needs to be readjusted to optimize the length of the nerve segment within the electrode, which can enhance the amplitude of the impulses . Overall, the length of the nerve segment within the recording electrode should optimally be kept at approximately 10 to 15 µm.
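Tracking impulse amplitude over a long recording, as described above, can be assisted by simple threshold-based spike detection on the digitized trace. The sketch below is a generic illustration: the threshold, refractory window, and synthetic trace are all invented, and it is not part of the recording software used in this study.

```python
import numpy as np

def detect_spikes(trace, fs_hz=20000, threshold=5.0, refractory_ms=1.0):
    """Return spike indices and peak amplitudes by simple threshold crossing.

    `threshold` is in the same (arbitrary) units as `trace`; a refractory
    window prevents one impulse from being counted more than once.
    """
    refractory = int(refractory_ms / 1000.0 * fs_hz)
    above = np.where(trace > threshold)[0]
    spikes, amplitudes = [], []
    last = -refractory
    for i in above:
        if i - last >= refractory:
            window = trace[i:i + refractory]
            peak = i + int(np.argmax(window))
            spikes.append(peak)
            amplitudes.append(float(trace[peak]))
            last = peak
    return np.array(spikes), np.array(amplitudes)

# Synthetic demo trace: baseline noise with three spike-like deflections.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 0.5, 20000)
for idx in (4000, 9000, 15000):
    trace[idx:idx + 20] += 12.0
idx, amp = detect_spikes(trace)
print(f"{idx.size} spikes detected, mean amplitude {amp.mean():.1f}")
```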
In the present study, we have described methodological aspects of our newly developed pressure-clamped single-fiber recording technique. This recording method is a true single-fiber recording approach, a key feature that distinguishes it from the commonly used teased-fiber single-unit recording method. We have validated our new recording method on tissue preparations including whisker hair follicles, skin-nerve preparations, and sciatic nerves. We show that this new recording method can be conveniently used to reliably record responses of mechanoreceptors in both whisker hair follicle and skin-nerve preparations. Furthermore, we have demonstrated that the pressure-clamped single-fiber recording method is applicable for recording impulses carried by different types of afferents, including Aβ-, Aδ-, and C-fibers. Thus, the recording method described here provides a new tool that can be used to explore the functions of specific sensory receptors, including mechanoreceptors, thermal receptors, and nociceptors. Using the pressure-clamped single-fiber recording technique, we have recently characterized properties of low-threshold mechanoreceptors in whisker hair follicles. In the present study, we have further illustrated technical details of this recording method so that other researchers in the field can adopt this new recording technique for their studies in sensory physiology and pain. We have demonstrated that our new recording technique is not only suitable for studying mechanoreceptors in whisker hair follicles but can also be used for investigating sensory receptors in the skin with the use of skin-nerve preparations. Previously, the teased-fiber single-unit recording technique was the main approach for studying mechanoreceptors as well as nociceptors in the skin using skin-nerve preparations. Although we have not applied our new recording method to study nociceptors in the present study, we have shown with sciatic nerve preparations that the new recording technique can be used to record impulses conveyed by different types of afferents, including Aβ-, Aδ-, and C-fibers. Therefore, the pressure-clamped single-fiber recording technique is ready to be used to study nociceptors, whose signals are conveyed by Aδ- and C-fibers.

There are a number of technical advantages of pressure-clamped single-fiber recording over the teased-fiber single-unit recording method. First, unlike teased-fiber single-unit recording, for which nerve preparation is tedious and very time-consuming, our new recording method is simple, and nerve preparations take only a short time to make. While mechanically splitting a nerve bundle into fine filaments is required for teased-fiber single-unit recordings, this delicate procedure is not required in the pressure-clamped single-fiber recording technique. This avoids the mechanical damage to nerve fibers that would inevitably occur with the teased-fiber single-unit recording technique. In teased-fiber single-unit recordings, spike analysis, including spike discrimination and sorting, is needed to differentiate single-unit spikes from multi-unit spikes. In contrast, such spike analysis is not needed for pressure-clamped single-fiber recordings because all impulses recorded come from a single nerve fiber in each recording. The pressure-clamped single-fiber recording technique allows prolonged and stable recordings of impulses for hours with a high signal-to-noise ratio.
However, in some cases, the amplitudes of impulses decreased over long recording periods, which appears to be mainly due to changes in the fit of the nerve segment within the recording electrode. This limitation can be addressed by monitoring and readjusting the fit of the nerve segment within the recording electrode, which can be easily achieved by adjusting the pressure in the recording electrode using the pressure-clamp device. In conclusion, the pressure-clamped single-fiber recording technique provides a novel, reliable, and convenient approach to study different types of sensory receptors in different structures, including whisker hair follicles, skin tissues, and other sensory organs.
Genetic modification of the shikimate pathway to reduce lignin content in switchgrass (Panicum virgatum)

The biofuel industry has developed significantly over the past two decades given the impending need to replace fossil fuels and mitigate climate change. Switchgrass (Panicum virgatum L.) is a perennial grass with C4 photosynthesis, an adaptation that anatomically separates the assimilation and reduction of CO2, thereby reducing photorespiration. Switchgrass is also a flagship sustainable biofuel feedstock species in North America given its wide native range, fast growth, high cellulose content, and relatively low requirements for water, nutrients, and pesticides. Lignocellulosic material is the cheapest feedstock for producing biofuels, and nearly 80% of switchgrass dry-weight biomass is composed of cellulose, hemicellulose, and lignin. Lignin is a major plant cell wall component and in grasses is composed of large, branched, and oxygenated polyaromatic compounds consisting of monomer units of coniferyl, sinapyl, and p-coumaryl alcohols. Since lignin contributes to biomass recalcitrance to deconstruction, reducing lignin content in feedstocks facilitates cellulose and hemicellulose hydrolysis, thus increasing fermentable sugar yields from biomass and improving its conversion efficiency to bioenergy and advanced bioproducts.

Several genetic engineering techniques have been used to reduce lignin content in plants. These include the silencing of genes encoding lignin biosynthetic enzymes such as 4-coumarate:CoA ligase and caffeate O-methyltransferase. Another promising strategy for reducing lignin in bioenergy crops involves the expression of bacterial 3-dehydroshikimate dehydratase (QsuB), which reduces the pool of precursors necessary for lignification. In the shikimate pathway, 3-dehydroshikimate is the precursor of shikimate and phenylalanine, which are key metabolites involved in lignin biosynthesis. QsuB converts 3-dehydroshikimate into protocatechuate and thereby limits lignin biosynthesis. Such genetic modifications have been shown to improve the saccharification of biomass compared to wild-type plants. For example, the expression of QsuB in switchgrass resulted in a 12%–21% reduction in lignin content and a 5%–30% increase in saccharification efficiency, as well as greater bioaccumulation of protocatechuate.

Plant-associated microbiomes are composed of populations of diverse bacteria and fungi that colonize internal and external plant tissues and may include beneficial, commensal, and pathogenic organisms. Microbiomes have been shown to be important for maintaining plant health and can be leveraged to increase biomass yield, enhance plant nutrient availability, improve drought tolerance, and provide other ecosystem services related to soil structure, water retention, and carbon storage. For example, switchgrass plants inoculated with Serendipita vermifera (originally Sebacina) produced as much as 75% and 113% more shoot biomass at 2-month and 3.5-month harvests, respectively. Plant genotype has been shown to be a factor involved in structuring the plant microbiome, as was found for bacterial communities in the switchgrass rhizosphere, as well as for aboveground and belowground fungal and bacterial microbiomes of switchgrass. Similar results were found for switchgrass phyllosphere microbiomes in the field.
Changes in the microbial community between highly productive and less productive switchgrass cultivars have been linked to greater and lower microbial nitrogenase activity, respectively, suggesting a possible link between microbiomes and cultivar yields. Genetic engineering can improve plant biomass yield and chemical properties, but it may also have unexpected impacts on plant-microbe interactions. For example, the silencing of the cinnamoyl-CoA reductase gene reduced lignin in poplar trees but also significantly changed the bacterial community in roots, stems, and leaves. Similarly, poplar trees downregulated in genes encoding the lignin biosynthetic enzymes caffeoyl-CoA O-methyltransferase, caffeic acid O-methyltransferase, cinnamoyl-CoA reductase, and cinnamyl alcohol dehydrogenase all displayed lower mycorrhizal colonization in vitro. DeBruyn et al. reported that low-lignin lines of COMT (caffeic acid O-methyltransferase)-downregulated switchgrass had no effect on bacterial diversity, richness, or community composition of soil samples, but they did not investigate the fungal community or other plant compartments. Thus, although engineered switchgrass with reduced lignin and no noted phenotypic differences could have obvious industrial advantages for deconstruction and conversion processes, the impact of the engineered trait on the structure and functioning of the plant microbiome still needs to be evaluated.

In this study, we assessed the impact of QsuB-engineered switchgrass plants on the microbiome across plant compartments. We accomplished this by characterizing both fungal and bacterial communities within the bulk soil, rhizosphere, roots, leaves, and inflorescences of QsuB and wild-type Cave-in-Rock switchgrass. We hypothesized that the QsuB-engineered trait would alter the structure of fungal and bacterial microbiomes by reducing species richness and evenness, particularly in belowground samples that support high amounts of microbial diversity.
Plant growth and transplant

The transgenic switchgrass line pZmCesa10:QsuB-5 and the parental wild-type (cultivar Alamo-A4) used in this study have been described previously. Three transgenic and three wild-type plants reared from tissue culture were raised in axenic conditions for 3 months, then planted in sterile potting mix (Sure Mix, Michigan Grower Products Inc., Galesburg, MI, U.S.) and grown vegetatively (16 hr light and 8 hr dark at 23°C) for 6 months to establish sufficient biomass to allow each plant to be split into three genetically identical individuals. Deionized water was applied every other day, and 1:10 Hoagland fertilizer was applied every other week to prevent nutrient deficiency under potting conditions. Splitting was done by excising each plant at the crown into three pieces of approximately equal crown size with sterilized scissors. The senesced aboveground tissues and old structural roots were trimmed off to retain only green aboveground tissue and a minimum of non-lignified young roots. After splitting, switchgrass plants were planted into new pots containing sand blended with field soil to provide a diverse microbiome inoculum. Field soil was collected from the top 20 cm of a switchgrass field at the long-term ecological research station for bioenergy cropping systems in Hickory Corners, MI, in August 2021 and was sieved through a 1-cm hardware cloth to homogenize it and remove root fragments and organic debris before mixing with sterile sand. Homogenized, sieved field soil was then mixed 50/50 (vol/vol) with double-autoclaved play sand to provide proper drainage of water in the pots. For microbiome analyses, nine biological replicates were used for both the wild-type and QsuB genotypes. Split plants were raised under the same conditions as those used prior to splitting. After 3 months, the final microbiome sampling was conducted, and the experiment was terminated.

Sample collection and processing

Sampling of aboveground and belowground switchgrass-associated microbiomes was carried out on two separate occasions: after splitting, prior to planting in field soil (pre-transplant sampling status), and 3 months after splitting and planting in the field soil (post-transplant sampling status). Samples were collected from two soil niches (bulk soil and rhizosphere) and four plant niches (root endosphere, leaf, inflorescence, and senesced leaves). Bulk soil from triplicate plant splits was collected with a sterile spatula, avoiding root zones. Rhizosphere soil was sampled from each replicate by collecting three young lateral roots (up to 3 cm in length, with root hairs included) from each plant. Roots were vigorously agitated by hand to detach loosely attached soil prior to washing. The roots were then collected in 2-mL Eppendorf tubes, filling 1/3 of the volume, and vortexed in ddH2O containing 0.05% Tween 20 for 20 min to dislodge the tightly attached soil. These root washes were kept as rhizosphere soil samples, which contained both rhizosphere and rhizoplane communities. Washed roots were then surface sterilized in 6% hydrogen peroxide solution for 30 s, rinsed twice with sterile ddH2O, and kept as root endophyte samples. Expanded young healthy leaves of each plant were sampled at splitting. Other aboveground tissues, including inflorescences and senesced leaves, were also sampled from each plant at the end of the experiment using sterile scissors.
For all aboveground tissues, approximately 1 cm of tissue was sampled 5 cm below the tip from three randomly picked tissues.

DNA extraction and Illumina MiSeq sequencing

Samples were flash frozen in liquid nitrogen within 1 hr of collection. Samples were then freeze-dried with a SpeedVac (Thermo Fisher, Waltham, MA, U.S.), placed in 2-mL centrifuge tubes together with three metal beads per tube, and ground to a powder with a TissueLyser II (Qiagen, Hilden, Germany) at maximum speed for 40 s. Microbial DNA was extracted from soil samples with a MagAttract PowerSoil DNA kit (Qiagen, Hilden, Germany) and from plant samples with an E.Z.N.A. Plant DNA kit (Omega Bio-Tek, Norcross, GA, U.S.). Libraries were prepared as previously described, with some modifications, and included blank samples as negative controls. Briefly, extracted DNA was amplified with the primer set 515f and 806r for bacterial communities, targeting the 16S rDNA V4 region, and the primers 5.8f and ITS4r for fungal communities, targeting the ITS2 rDNA region. Following the initial amplification, amplicons were PCR-ligated onto Illumina sequencing adapters and customized barcodes and normalized with a Norgen DNA purification kit (Norgen Biotek Corp., Thorold, ON, Canada). Pooled barcoded amplicons were then purified and concentrated with Amicon centrifugal units (Sigma-Aldrich, St. Louis, MO, U.S.) and further purified with a HighPrep PCR Clean-up System (MAGBIO Genomics, Gaithersburg, MD, U.S.). Sequencing was conducted at the Michigan State University RTSF Genomics Core (East Lansing, MI, U.S.) with a v3 kit on an Illumina MiSeq sequencer. The raw sequences were demultiplexed with default settings in bcl2fastq, filtered, and clustered into amplicon sequence variants (ASVs) using DADA2 in R 4.0.2. ASV taxonomic annotations were generated using CONSTAX2 v2.0.18 with SILVA v138 for the 16S region and UNITE 9.0 for the ITS region. Raw 16S and ITS sequence data were deposited in NCBI under BioProject IDs PRJNA1002602 and PRJNA1002603, respectively.

Statistical analysis and data visualization

Microbial 16S and ITS rRNA amplicon sequence variant (ASV) tables, taxonomy tables, and metadata were imported into the R software for statistical computing and graphics. We removed ITS sequences with BLAST identity and coverage of ≤60% to the UNITE fungal database v9.0. Pre-transplant leaf samples and post-transplant senesced leaf samples were dominated by plant organelle sequences (e.g., mitochondria, chloroplasts) with very low numbers of fungal and bacterial sequences; therefore, we removed these samples from our analysis (Fig. S1 and S2). Mitochondrial and chloroplast sequences were also removed from the overall 16S data set. Sequence distributions allowed for the detection and removal of outlier samples (one post-transplant non-QsuB leaf sample and one post-transplant non-QsuB root sample) with low fungal read counts (Fig. S3 and S4). Samples with low numbers of reads (i.e., distribution outliers) were removed by adopting rarefaction cutoffs of 2,948 and 12,126 sequence reads per sample for fungi and bacteria, respectively. Rarefaction curves were calculated in the vegan package and plotted with the ggplot2 package. Rarefaction curves showed that most samples recovered the full diversity present in each sample, and rarefaction only marginally affected the total number of ASVs detected across the entire data sets (Fig. S5). Rarefied ASV richness and Shannon diversity indices were calculated in vegan.
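To make the rarefaction and alpha-diversity step above more concrete, a minimal R sketch is shown below. Only the use of vegan and the reported cutoffs come from the text; the object name asv_tab and the remaining parameter choices are hypothetical illustrations, not the authors' actual code.

library(vegan)

# Hypothetical ASV count matrix `asv_tab` with samples as rows and ASVs as columns
cutoff <- 12126                                    # bacterial cutoff reported above (2,948 for fungi)
asv_tab <- asv_tab[rowSums(asv_tab) >= cutoff, ]   # drop samples below the rarefaction cutoff
rarecurve(asv_tab, step = 1000, label = FALSE)     # per-sample rarefaction curves
set.seed(42)                                       # make the random subsampling reproducible
asv_rar <- rrarefy(asv_tab, sample = cutoff)       # rarefy all samples to the same depth
richness <- specnumber(asv_rar)                    # rarefied ASV richness per sample
shannon <- diversity(asv_rar, index = "shannon")   # Shannon diversity index per sample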
Beta-diversity (Bray-Curtis) distance matrices were computed to assess community structure between samples and sample groups. We used nonmetric multidimensional scaling (NMDS) and principal coordinate analysis (PCoA) ordinations to visualize beta-diversity. Permutational multivariate analysis of variance (PERMANOVA) was performed to test for statistical differences in beta-diversity between sample groups. We tested the interactions between Niche and Treatment, Status and Treatment, and Status and Niche while controlling for Status, Niche, and Treatment, respectively. Since PERMANOVA (“adonis2”, vegan R package) does not allow specifying random effects, we took advantage of the sequential nature of the function in calculating the sums of squares and specified the fixed/random factor as the first term in the model. To assess differences in dispersion between groups (i.e., multivariate heteroscedasticity) that can contribute to the group differences detected with adonis2, a multivariate dispersion analysis was used as implemented in the R function “betadisper”. To compare alpha-diversity measures between groups (i.e., ASV richness and Shannon index), we used a nonparametric Wilcoxon signed-rank test with P values corrected for multiple comparisons using the Benjamini-Hochberg method. Stacked bar charts were generated to show the relative abundance of bacterial and fungal lineages in sample groups. To identify differentially abundant ASVs across sample groups, we used a pairwise Wilcoxon test and DESeq2, implemented in the stats and DESeq2 R packages, respectively. All analyses and figures were generated in R, and the R code to reproduce the analysis is available at https://github.com/Gian77/Scientific-Papers-R-Code/.
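As a rough illustration of the beta-diversity testing described above, the following R sketch applies adonis2 with the controlling factor placed first (to exploit the sequential sums of squares), followed by a dispersion check and a Benjamini-Hochberg-corrected Wilcoxon comparison. The objects bray_dist, meta, and the column names are assumed stand-ins rather than the authors' actual objects.

library(vegan)

bray_dist <- vegdist(asv_rar, method = "bray")     # Bray-Curtis dissimilarities from rarefied counts
# Sequential sums of squares: the factor to control for goes first, the interaction last
perm <- adonis2(bray_dist ~ Status + Treatment + Status:Treatment,
                data = meta, permutations = 999, by = "terms")
# Check whether group differences could reflect multivariate dispersion rather than location shifts
disp <- betadisper(bray_dist, group = meta$Treatment)
anova(disp)
# Alpha-diversity comparison between groups with Benjamini-Hochberg correction
pairwise.wilcox.test(shannon, meta$Treatment, p.adjust.method = "BH")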
Data summary and overview

In total, we obtained 17,811,594 ITS and 30,086,564 16S raw sequence reads from the 198 sample libraries. After removing nontarget ASVs, including non-fungal eukaryotes, chloroplasts, and mitochondria, a total of 13,403,151 ITS and 24,577,163 16S reads remained. These accounted for 8,089 and 33,853 ASVs for the ITS (fungal) and 16S (bacterial) communities, respectively, distributed across 144 total samples. On average, each sample had 93,077.44 (± 41,537.25 standard deviation) ITS sequence reads and 170,674.7 (± 110,850 standard deviation) 16S sequence reads. No sequences remained in negative control samples. The three experimental variables in our design were status (pre-transplant and post-transplant to field soil), niche (bulk soil, rhizosphere soil, roots, leaves, and inflorescences), and genotype (QsuB and non-QsuB wild-type). In the nonmetric multidimensional scaling analyses of the fungal and bacterial data sets, samples from the same niche clustered together, especially for bacterial communities, when plotted in two dimensions (Fig. S6). Bulk soil and rhizosphere fungal and bacterial communities clustered with each other but apart from root and aboveground communities. Bacterial communities of belowground samples (roots, rhizosphere, and bulk soil) were prominently distinct from those of aboveground samples (leaf and inflorescence). Pre- and post-transplant samples were also clearly separated in ordination space (Fig. S6). Sampling status and sampling niche thus had obvious influences on both fungal and bacterial communities. Therefore, to investigate the influence of genotype on microbial communities, we split our data sets by sampling niche and status in the following analyses.

QsuB leaf and pre-transplant root samples had higher bacterial richness and diversity

In general, soil and rhizosphere samples had significantly higher richness than inflorescence and leaf samples (Wilcoxon test, P < 0.05). The QsuB genotype had no significant influence on fungal richness in any plant niche of pre-transplant or post-transplant samples. However, QsuB plants had significantly greater bacterial richness in post-transplant leaf and pre-transplant root samples, but not in post-transplant root samples. Fungal communities in root samples had significantly lower Shannon diversity compared to soil, rhizosphere, and aboveground tissues. The QsuB genotype had no significant influence on fungal Shannon diversity indices across sampling niches for both pre-transplant and post-transplant samples. For bacterial communities, soil samples (bulk soil and rhizosphere) had significantly greater diversity than plant samples (root, inflorescence, and leaf). QsuB plants had significantly greater bacterial Shannon indices in post-transplant inflorescence, post-transplant leaf, and pre-transplant root samples, but not in post-transplant root samples. Additionally, it is worth noting that, for the same sampling niche, post-transplant samples always had greater bacterial and fungal richness and Shannon indices than the corresponding pre-transplant samples.

QsuB significantly influenced root and leaf fungal community beta-diversity

We used principal coordinate analysis ordinations to visualize beta-diversity and statistically examined the treatment effects on beta-diversity.
The QsuB trait significantly influenced fungal community structure in root (P = 0.002) and post-transplant leaf (P = 0.041) samples according to PERMANOVA (Table S1). In root samples, genotype, status, and their interaction were all significant factors shaping fungal community structure. The QsuB genotype explained the most variance, with the highest R2 of 23.09% (P = 0.002), followed by status (R2 = 17.68%, P = 0.002) and the interaction (R2 = 6.47%, P = 0.003) (Table S1). However, we also detected differences in multivariate dispersion between the groups analyzed with PERMANOVA for the root fungal community (Table S2).

QsuB significantly influenced root and rhizosphere bacterial community beta-diversity

The root bacterial communities of QsuB and wild-type switchgrass separated from each other in two-dimensional PCoA, and this visual observation was supported by PERMANOVA. The QsuB genotype significantly influenced bacterial community structure in root (P = 0.003) and rhizosphere (P = 0.029) samples (Table S1). In both root and rhizosphere samples, genotype, status, and their interaction were all significant factors shaping bacterial community structure. In root samples, status explained the most variance, with an R2 of 20.56% (P = 0.003), followed by the QsuB genotype (R2 = 5.67%, P = 0.003) and their interaction (R2 = 4.35%, P = 0.019) (Table S1). In rhizosphere samples, status also explained the most variance, with an R2 of 21.31% (P = 0.003), followed by the QsuB genotype (R2 = 4.15%, P = 0.029) and their interaction (R2 = 3.83%, P = 0.029) (Table S1). In contrast to the fungal data, we did not detect differences in multivariate dispersion between the groups analyzed with PERMANOVA for the bacterial community (Table S2). For both root and rhizosphere samples, the influence of the QsuB genotype was apparent only in pre-transplant samples in the PCoA plots, not in post-transplant samples. To focus on the impact of the QsuB genotype and eliminate interference from sampling status, we also assessed beta-diversity for the post-transplant root and rhizosphere samples alone. This approach revealed that the QsuB genotype had a significant influence on the bacterial community of post-transplant root (P = 0.001) and rhizosphere (P = 0.003) samples (Fig. S7; Table S1).

Fungal composition

Overall, sixteen fungal lineages were detected in our samples. Ascomycota, Glomeromycotina, and Basidiomycota were predominant, with average relative abundances of 80.63%, 8.57%, and 7.97%, respectively (Fig. S8). Most Glomeromycotina were detected in root samples, and the majority of Basidiomycota were detected in post-transplant bulk soil and rhizosphere samples (Fig. S8). The beta-diversity analyses above showed that the QsuB genotype significantly influenced the fungal community of root and leaf samples. Nearly 94.00% of leaf fungi were Ascomycota (Fig. S8). The significant effects of the QsuB genotype on root fungal communities were likely explained by relatively more Ascomycota and relatively fewer Glomeromycotina in QsuB plants (Fig. S8). In post-transplant root samples, QsuB plants had 62.52% Ascomycota and 32.86% Glomeromycotina, while the non-QsuB wild-type had 50.74% Ascomycota and 47.76% Glomeromycotina. Statistical analysis of lineage-level relative abundance data showed that only Ascomycota (P < 0.001) were significantly influenced by the QsuB genotype in post-transplant root samples.
However, in pre-transplant root samples, both Ascomycota (P = 0.009) and Glomeromycotina (P = 0.033) were significantly influenced by the QsuB genotype. In pre-transplant root samples, QsuB plants had 78.35% Ascomycota and 18.57% Glomeromycotina, while the non-QsuB wild-type had 67.63% Ascomycota and 27.71% Glomeromycotina (Fig. S8). To further investigate the QsuB genotype effects, we visualized the relative abundance of fungi at the order level. We present the top 10 orders (by relative abundance), with the remaining orders detected represented as “Others”. In pre-transplant root samples, three of the top 10 orders were significantly influenced by the QsuB genotype, and all of them belong to the class Sordariomycetes. QsuB plants had a significantly greater relative abundance of Hypocreales (P = 0.021) and Sordariales (P = 0.010), but a lower relative abundance of Myrmecridiales (P = 0.005). Similar patterns, with more Sordariales and fewer Myrmecridiales in QsuB plants, were also observed in post-transplant root samples. In pre-transplant samples, the relative abundance of Glomerales decreased from 27% in non-QsuB to 18% in QsuB plants, while in post-transplant samples, it decreased from 47% to 32%.

Bacterial composition

Overall, 47 bacterial lineages were detected in our samples, and the sum of the top 10 most abundant lineages accounted for an average relative abundance of 95.64% among all samples. Proteobacteria and Actinobacteria were the predominant bacterial lineages, with average relative abundances of 45.47% and 17.53%, respectively. Bacterial communities of root and rhizosphere samples were significantly influenced by the QsuB genotype. The only consistent trend in the bacterial community was that QsuB plants had relatively fewer Actinobacteria compared to the wild-type: 20.38% vs 28.72% (pre-transplant root), 10.93% vs 20.45% (post-transplant root), 10.90% vs 15.41% (pre-transplant rhizosphere), and 30.80% vs 36.56% (post-transplant rhizosphere). In pre-transplant rhizosphere and root samples, we observed a greater relative abundance of Myxococcota in QsuB plants than in the wild-type, but this trend was not observed in post-transplant samples. In post-transplant rhizosphere and root samples, we observed relatively more Proteobacteria in QsuB plants than in the wild-type, but this trend was not observed in pre-transplant samples. These trends were not statistically significant. The bacterial relative abundances of the top 10 orders are shown in Fig. S9. Some trends are numerically evident: compared to the wild-type, QsuB plants had relatively more Pseudonocardiales in roots and more Burkholderiales in the rhizosphere (Fig. S9). The greater alpha-diversity (richness and Shannon index) in QsuB plants relative to the wild-type was likely explained by a relatively greater abundance of “Others” in pre-transplant root samples (28.90% in QsuB vs 14.02% in wild-type), in post-transplant inflorescence samples (45.27% in QsuB vs 37.30% in wild-type), and in post-transplant leaf samples (30.21% in QsuB vs 27.71% in wild-type) (Fig. S9).

Indicator ASVs

We used differential abundance measurements between the QsuB genotype and wild-type switchgrass to identify indicator ASVs. Consistent with our results, the Wilcoxon test commonly yields a higher number of significant ASVs than DESeq2 because it is less stringent. DESeq2 identified 14 and 15 bacterial biomarkers in pre- and post-transplant root samples, respectively, while the Wilcoxon test identified 99 and 307 significant ASVs in the same subgroups.
No indicator ASVs were detected in leaf fungal communities, while three (two in pre-transplant and one in post-transplant samples) were detected by DESeq2 for rhizosphere bacterial communities, even though the microbial communities of those samples were significantly influenced by the QsuB genotype. The root niche was the only one in which both fungal and bacterial communities were significantly influenced, so we focused on the fungal and bacterial indicator ASVs of root samples. Most indicator ASVs identified with DESeq2 were also identified with the Wilcoxon test. Thirty-one and thirty-two fungal indicator ASVs were identified in the pre- and post-transplant root samples, respectively. The majority of these (14 of 31 from pre-transplant and 20 of 32 from post-transplant samples) were Glomeromycotina (Fig. S10). All of the identified Glomeromycotina belonged to the arbuscular mycorrhizal fungal (AMF) family Glomeraceae. Four Funneliformis (AMF) indicator ASVs were identified in post-transplant root samples but not in pre-transplant root samples. Fifteen fungal indicator ASVs were detected in both pre- and post-transplant root samples. Six of them were associated with the QsuB genotype: ASV_27 (Zopfiella), ASV_35 (Chaetomiaceae), and ASV_112, 184, 189, and 273 (Rhizophagus). Nine of them were associated with the wild-type switchgrass: ASV_21 (Myrmecridium), ASV_22 and 34 (Glomeraceae), and ASV_26, 36, 44, 61, 99, and 126 (Rhizophagus) (Fig. S10). Fewer bacterial indicator ASVs than fungal indicator ASVs were identified by DESeq2. Bacterial indicator ASVs were dominated by Proteobacteria and Actinobacteriota in pre- and post-transplant root samples, respectively. In pre-transplant root samples, Lentzea aerocolonigenes (ASV_191 and 244) was associated with QsuB plants, while Amycolatopsis mediterranei (ASV_46 and 69) was associated with non-QsuB wild-type plants (Fig. S11). Both species belong to the Pseudonocardiales. In post-transplant root samples, Streptomyces (ASV_101) was the only Actinobacteria indicator associated with QsuB plants (Fig. S12).
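For readers who want to reproduce this kind of screen, a minimal DESeq2 sketch for flagging ASVs that differ between QsuB and wild-type samples is given below; asv_counts, meta, the genotype labels, and the 0.05 cutoff are assumptions for illustration and are not taken from the authors' repository.

library(DESeq2)

# Hypothetical inputs: `asv_counts` (ASVs as rows, samples as columns) and `meta` with a Genotype column
dds <- DESeqDataSetFromMatrix(countData = asv_counts, colData = meta, design = ~ Genotype)
dds <- DESeq(dds, sfType = "poscounts")            # poscounts handles the many zeros typical of ASV tables
res <- results(dds, contrast = c("Genotype", "QsuB", "WT"))
# ASVs with adjusted P < 0.05 are treated as indicators; the sign of log2FoldChange
# indicates enrichment in QsuB versus wild-type samples
indicators <- subset(as.data.frame(res), !is.na(padj) & padj < 0.05)
indicators <- indicators[order(indicators$log2FoldChange), ]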
In this research, we set out to assess whether switchgrass engineered for low lignin with the QsuB gene would impact the microbiome associated with different aboveground and belowground plant organs. As hypothesized, our results indicate that QsuB-engineered plants impacted switchgrass-associated microbial community structure, with the QsuB genotype influencing both fungal and bacterial communities. Specifically, QsuB plants had a significant impact on the fungal community in root and leaf samples and on belowground bacterial microbiomes in the root and rhizosphere. In contrast, we observed little impact of the QsuB genotype on inflorescence and bulk soil fungal or bacterial microbiomes.

QsuB plants showed lower relative abundance and diversity of AMF

Arbuscular mycorrhizal fungi are obligate biotrophic plant mutualists that belong to the Glomeromycotina. These fungi are known to be beneficial to plant nutrition and soil health by transporting nutrients (e.g., P and N) and water to plant hosts via their hyphal networks, while stimulating and stabilizing soil organic matter. Under nutrient limitation, host plants are more dependent on AMF for nutrients. In this study, we detected a lower relative abundance of Glomerales (the most frequent AMF order detected in this study) sequences in root samples from QsuB plants (both pre- and post-transplant). The expression of QsuB in switchgrass results in the reduction of lignin and the accumulation of protocatechuate in biomass and improves biomass saccharification efficiency. Less lignin and more protocatechuate may have stimulated bacterial community activity and increased soil nutrient mineralization and turnover rates. More available nutrients may reduce the reliance of plants on AMF and may indirectly affect AMF colonization. For example, nutrient deficiency could trigger plant signaling compounds, such as phenols, flavonoids, and sesquiterpenoids, and promote the growth of AMF appressoria. Five low-lignin transgenic poplar lines with downregulated monolignol biosynthesis genes displayed a lower mycorrhizal colonization percentage than the wild-type, and the authors proposed that the modifications to the monolignol pathway impacted ectomycorrhizal colonization, possibly by changing cell wall ultrastructure and decreasing the efficiency of communication between plants and fungi.

QsuB roots showed an increase in the relative abundance of Sordariales and Hypocreales

In this study, a greater relative abundance of Ascomycota (e.g., Sordariales and Hypocreales) was detected in root samples from QsuB plants compared to the wild-type. Previous work has identified Sordariales and Hypocreales as dominant decomposers in arable soil under long-term organic management, and we found these orders to be predominant in our samples. It is known that mycorrhizal and saprotrophic fungi compete for niche space and organic substrates. For example, Cao et al. reported that AMF inhibited the population abundance and enzyme activity of saprotrophic fungi, possibly by reducing the availability of limiting nutrients. Increased accessibility of carbohydrates and soil nutrients may therefore have stimulated the Sordariales and Hypocreales that were associated with the QsuB genotype. Interestingly, in both root and leaf samples, QsuB plants had a higher relative abundance of Hypocreales.
Six Fusarium (members of the Hypocreales) ASVs were identified as indicator ASVs whose relative abundances significantly increased in the root and leaf samples of QsuB plants compared to the wild-type. This is of interest because many Fusarium species are known plant pathogens. In root samples, QsuB plants had a relatively greater abundance of Fusarium in their rhizobiome compared to the wild-type: 9.81% vs 2.82% (pre-transplant) and 7.27% vs 1.97% (post-transplant); however, this trend was not obvious in the leaf samples (Fig. S13). The only two leaf Fusarium indicator ASVs identified at the species level were Fusarium oxysporum (ASV_15 and 19), and they were also root Fusarium indicator ASVs. Soils are often the source of plant-associated F. oxysporum, and detached leaf assays have shown that F. oxysporum might be benign or beneficial in switchgrass, even though other Fusarium species were pathogenic. In this study, F. oxysporum was the only Fusarium species identified in leaf samples associated with QsuB plants. Given the diversity of Fusarium species, the complexity of their functions, and the limits of short-amplicon sequencing, the actual roles of the Fusarium spp. in our study are difficult to discern. In root samples, there were significantly more Sordariales in QsuB switchgrass plants than in the wild-type, largely accounted for by Zopfiella (Fig. S13). Interestingly, some Zopfiella species have the potential to control plant disease by producing antifungal compounds and to promote plant growth by increasing stress resistance.

QsuB plants hosted a greater richness and diversity of bacteria

We found that QsuB-engineered plants supported a significantly greater richness and diversity of bacteria in inflorescence, leaf, and root (pre-transplant) samples. Generally, bacteria are efficient at degrading simple substrates, while fungi are better equipped to decompose recalcitrant organic matter, such as lignin. Lignin biodegradation starts with lignin depolymerization, which is predominantly performed by fungi. Therefore, compared to lignin, protocatechuate represents a more favorable growth substrate for bacteria, which compete effectively for such carbon and energy sources. Plants expressing QsuB accumulate more protocatechuate inside their tissues, which may stimulate the activity of the bacterial community. Diverse bacteria are indeed known to degrade protocatechuate, including members of the orders Bacillales, Burkholderiales, Sphingomonadales, and Pseudomonadales.

Fewer Actinobacteria were detected in the root and rhizosphere samples of QsuB plants

In this study, the relative abundance of Actinobacteria detected in the root and rhizosphere samples of QsuB plants was lower than that of the wild-type. Actinobacteria are an important terrestrial group of detritus decomposers that are able to degrade lignin-containing materials. Given that QsuB-engineered switchgrass biosynthesizes less lignin, it may be expected to host a lower relative abundance of Actinobacteria. In post-transplant root samples, all non-QsuB-associated bacterial indicator ASVs showing significantly increased relative abundances were Actinobacteria (Fig. S9).

The AMF and bacterial community dynamics may be interlinked

We have described how the QsuB genotype influenced the fungal communities, especially AMF, as well as the bacterial communities belowground. We posit that the change in the AMF community might be an important driver of the changes observed in those bacterial communities.
Previous studies have found that the AMF community composition was a significant contributor to determining the bacterial community composition, perhaps through changes in root exudate composition and soil structure modification. Interestingly, AMF-associated bacterial communities have been shown to be structured predominantly by AMF symbiont identity (Glomus geosporum or Glomus constrictum) rather than by the host plant (Plantago lanceolata or Hieracium pilosella). AMF hyphae release a variety of exudates, including carbohydrates, polyols, amino acids, amines, nucleic acids, and organic acids; different AMF species, or the same AMF under different abiotic conditions, might have different metabolite profiles of hyphal exudates. The carbon sources supplied by AMF have important roles in bacterial growth and distribution, so it is likely that AMF activity has an impact on the surrounding bacterial communities. AMF hyphae also provide a scaffold bridging the soil and root microbiomes. The addition of protocatechuate to the culture medium inhibited primary root growth but increased lateral root numbers in Arabidopsis. This potential change in root morphology may also influence AMF development and surrounding bacterial communities. Some specific bacteria showed the same (positive or negative) response to AMF, even under different experimental setups. For example, in this study, root and rhizosphere samples from QsuB plants had relatively fewer AMF and Actinobacteria. This positive response of Actinobacteria to AMF has also been observed in other studies. Recent work showed that QsuB switchgrass had no yield penalty compared to the wild-type under optimal irrigation in the field. However, as AMF and Actinobacteria are associated with plant drought resilience, this raises questions about how QsuB plants would fare under water-limiting conditions.
Importance of testing microbiome impacts in QsuB-engineered plants
Engineering the biofuel feedstock switchgrass with the QsuB gene is a promising strategy for reducing lignin content and improving saccharification. Our work highlights the importance of assessing the impact of the QsuB genotype on plant-associated microbiomes. We found that QsuB engineering changed plant physiology and its microbiomes, including important functional microbial groups such as AMF and Actinobacteria. Unfortunately, we did not obtain chemical data for the roots and surrounding rhizosphere, so the lignin and protocatechuate contents of specific belowground compartments are unknown. It is possible that the accumulation of protocatechuate altered plant cell osmolarity and further modified the root exudates, which may have directly contributed to the microbial community dynamics. A longer-term field study of QsuB bioenergy plants, with measurements of biomass yield, soil properties, and microbiome communities in different locations and climates, could help confirm the promise and future of QsuB-engineered bioenergy crops under real-world agricultural scenarios.
Conclusion
Lower lignin content, higher fermentable sugar yield, and greater biomass-to-biofuel conversion efficiency make the QsuB genotype promising for wide application. As hypothesized, our study found that QsuB-engineered plants impacted switchgrass-associated fungal and bacterial communities, especially those associated with the roots and rhizosphere.
Importantly, the microbiome differences between QsuB plants and non-modified wild-type switchgrass did not appear to impact the relative abundances of putative switchgrass pathogens. However, the reduction in AMF diversity and relative abundance in QsuB plants is noteworthy and raises questions about how it could further impact plant performance under drought conditions and, consequently, soil physico-chemical properties. By characterizing the microbiome responses to the QsuB genotype, we provide a baseline for evaluating the effects of QsuB and other bioengineered traits on plant-microbe interactions.
|
Adoption of preventive measures during and after the 2009 influenza A (H1N1) virus pandemic peak in Spain | f9be310d-9526-41ce-9b83-e3c5d546db36 | 7119352 | Preventive Medicine[mh] | Novel influenza A (H1N1) emerged from Mexico in April 2009 . On June 11, 2009, the World Health Organization raised the pandemic alert level to phase 6 . The number of deaths at the beginning led to early predictions of massive spread and unknown clinical course . A worldwide debate was sparked on the advisability of epidemiological control measures. Most western countries decided to vaccinate at-risk groups while the general population was advised to adopt preventive measures to avoid or mitigate transmission. In Spain, the first suspected cases of 2009 influenza A (H1N1) were notified on 26 April 2009 . In fact, one of them was the first laboratory-confirmed case in Europe. On July 2009, the Spanish Ministry of Health (MoH) began a campaign recommending two preventive measures: covering the mouth and nose with a tissue when sneezing or coughing (respiratory hygiene) and washing hands regularly using soap and water . Furthermore, a vaccination campaign to some specific groups began on November 16, 2009 in Spain. Since substantial changes in risk perceptions ocurr throughout the course of pandemics , this study explores behaviors and perceptions related to the 2009 influenza A (H1N1) during the peak and the declining phase of the pandemic in Spain.
Two waves of anonymous cross-sectional surveys using the computer-assisted telephone interview (CATI) method were conducted. The first wave (December 2009) covered the pandemic peak (weeks 43–46/2009) and the second wave (February 2010) covered the declining phase (weeks 47/2009–4/2010). The sample size was estimated at 800 interviewed people per wave, providing an error of ±3.5% with a confidence level of 95% for p = q = 0.5. Methods were previously described in “Attitudes and Preventive Behaviours Adopted during the (H1N1) 2009 Influenza Virus epidemic in Spain”. To describe and analyze the primary outcomes, three variables were created summarizing preventive measures: MoH-recommended measures (respiratory hygiene and/or more frequent hand washing); avoidance measures (avoiding people with influenza and/or any of the following: avoiding crowds, avoiding health facilities, avoiding public transport); and purchase measures (buying masks and/or hand sanitizer).
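The stated sample size follows from the standard formula for estimating a proportion, n = z²·p·(1−p)/e²; the minimal Python sketch below (illustrative only, not the authors' code) reproduces the figure of roughly 800 respondents per wave from the reported margin of error and confidence level.

```python
from math import ceil
from scipy.stats import norm

def sample_size(conf_level=0.95, margin=0.035, p=0.5):
    """Sample size for estimating a proportion under simple random sampling:
    n = z^2 * p * (1 - p) / e^2, with z the two-sided critical value."""
    z = norm.ppf(1 - (1 - conf_level) / 2)  # ~1.96 for a 95% confidence level
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size())  # 785, consistent with the ~800 interviews planned per wave
```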
The associations of personal characteristics (including medical conditions considered risk factors that warrant vaccination) and attitudes with the primary outcomes were analyzed using multivariate logistic regression, adjusting for wave. Data entry and statistical analysis were performed with the SPSS software program (v13.0).
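The authors fitted these models in SPSS; purely as an illustration, the sketch below shows how a comparable wave-adjusted logistic regression could be specified in Python with statsmodels. All variable names (moh_measures, sex, age_group, wave, etc.) are hypothetical placeholders, not the study's actual coding.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis dataset: one row per respondent (column names are placeholders).
df = pd.read_csv("h1n1_survey_waves.csv")

# Outcome: adoption of MoH-recommended measures (1 = yes, 0 = no),
# modelled on characteristics and attitudes, adjusting for survey wave.
model = smf.logit(
    "moh_measures ~ C(sex) + C(age_group) + C(education) + C(town_size)"
    " + C(high_concern) + C(perceived_effectiveness) + C(wave)",
    data=df,
).fit()

print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals for the odds ratios
```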
A total of 4,892 eligible participants were contacted. Of these, 2,823 refused to participate, 223 were unable to respond, and 219 did not finish the interview; 1,627 completed the interview (response rate of 33.3%). The distributions of sex, age group, and educational level were similar in both waves (data not shown). The two most frequently adopted preventive measures were those recommended by the Spanish MoH. Overall, 79.5% of the participants reported adopting at least one preventive measure in the first wave. The proportion was lower in the second wave (74.6%, p = 0.02). As shown in , the factors associated with the adoption of the MoH-recommended measures were female gender, secondary or higher educational level, living in towns with more than 50,000 inhabitants, high concern about becoming infected by 2009 influenza A (H1N1), perceiving the preventive measures to be highly effective, and high perception of the usefulness of the information provided by the government. For purchase measures, similar associated factors were identified, except that respondents belonging to the younger age groups (<55) and those living with school-aged children were more likely to follow these measures. In addition, no association was observed regarding the perceived usefulness of the information provided by the government. Avoidance measures were independently associated with the group aged 18–35 years, living in towns with more than 50,000 inhabitants, high concern about becoming infected by 2009 influenza A (H1N1), and perceiving the preventive measures to be highly effective.
To our knowledge this is the first study reporting information on self-reported behaviors and perceptions towards the 2009 influenza A (H1N1) pandemic during both the peak and the declining phase. As expected, there was a decrease in the adoption of preventive measures. In addition, we found that respiratory hygiene and hand washing were the most frequently adopted preventive measures. These two measures are considered effective non-pharmaceutical public health interventions against influenza. The high prevalence of both measures is consistent with the government campaign. Clearly, 2009 influenza A (H1N1) impacted health-related perceptions and behaviors in terms of self-protection, as approximately 80% of respondents adopted at least one preventive measure. Some of these behaviors persisted among a large proportion of the population after the pandemic peak, although a significant decrease was observed during the declining phase. The hand washing rate in this study was in the range reported by previous studies (28%–80%). In our study, the proportion of respondents who purchased face masks (3.9% and 1.9% in the first and second waves, respectively) was lower than the proportions reported by other European countries during the pre-pandemic peak phase (7%), the USA (5%), and Malaysia (8%). There were also wide regional differences in the prevalence of wearing a face mask, ranging between 22% and 89% in previous Asian studies, again much higher than the proportion we found in Spain (7%). The same pattern was observed for avoidance measures. The proportion of the Spanish general population reporting keeping away from crowded places was 4%, while in Asian countries it was around 55%. This might be explained by a higher public concern in those countries regarding the threat of the severe acute respiratory syndrome (SARS) or the human avian H5N1 virus a few years earlier. This study highlights the importance of perceptions and beliefs, such as perceived susceptibility to infection by 2009 influenza A (H1N1), perceived effectiveness of preventive measures, and perceived usefulness of government information, in explaining preventive health behaviors. A recent review reported similar findings, whereas Cava et al. observed that the credibility of the information received from public health authorities could affect the adoption of some measures. On the other hand, some associated factors observed in the present study (i.e., female sex, higher educational level) are consistent with previous reports. One of the limitations of this study was the use of telephone surveys, which excluded households without a telephone line. While this potential selection bias cannot be ruled out completely, its magnitude is limited since more than 80% of households have a landline in Spain. More important is that we obtained a response rate of 33%, which is nevertheless in the range of other published studies. Finally, since cultural factors could result in differences in behavioral responses, caution should be exercised when generalizing our results to other contexts. The Spanish MoH campaign was effective in encouraging the general population to follow its recommendations. The results provided here can be useful in the case of similar future events.
FA analyzed the data, contributed to the data interpretation and drafted the report. MN was involved in the study design, discussion of the data, and helped write the report. MJL, AP and XGC participated in the study design, data discussion and drafting the paper. All authors have read and approved the final version.
The authors declare that there are no conflicts of interest.
|
Validity Assessment of Self-reported Medication Use for Hypertension, Diabetes, and Dyslipidemia in a Pharmacoepidemiologic Study by Comparison With Health Insurance Claims | 3b508f1b-d806-4868-88d3-5a4c7d356ace | 8328856 | Pharmacology[mh] | The number of patients with three of the major lifestyle-related diseases—hypertension, diabetes and dyslipidemia—is increasing. These are major risk factors for cardiovascular disease. – To assess relationships between risk factors and health outcomes in cohort studies, participant characteristics including medication use are often evaluated using self-reported questionnaire. Despite the possibility of information bias, however, the accuracy of self-reported questionnaires has not been sufficiently studied. , In particular, few reports have explored the individual determinants of discordance between self-reported questionnaires on medication use and the true status of medication. To date, only a few studies have evaluated the validity of self-reported medication use in population-based studies, and the results of these have been inconsistent. – Although self-reported medication use for lifestyle-related disease has shown high validity with sensitivity over 70%, the sensitivity nevertheless varied from study to study. This inconsistency has been explained by differences in data collection method, type of medication, and surveyed populations. Moreover, only a few studies have identified individual determinants of discordance between self-reported medication use and true status of medication. , , These include sex, , age, , marital status, number of medications regularly taken, smoking status, health status and education years, albeit that the results varied among studies. The aim of this study was to evaluate the validity of self-reported medication use for lifestyle-related diseases in our population-based Tsuruoka Metabolomics Cohort Study using health insurance claims as a standard. Individual determinants of discordance, such as social factors, were also examined.
Japanese healthcare insurance system
Japan has a universal healthcare insurance system which covers all citizens. There are two types of coverage for individuals aged younger than 75 years: Employees' Health Insurance and National Health Insurance (NHI). The former is managed by the workplace and covers salaried employees, while the latter is managed by municipalities and covers individual proprietors, pensioners, and those with irregular employment. On reaching 75 years of age, current NHI members are switched from NHI to the Medical Care System for the Advanced Elderly. If an insured member goes to a hospital or pharmacy as an outpatient, their information is stored as health insurance claims data (medical/dental outpatient claims and pharmacy claims). In Japan, long-term prescriptions are allowed, except for special medications, such as newly launched or psychoactive medications; newly launched medications, for example, can be prescribed in 2-week courses. In contrast, most medications, particularly those for lifestyle-related diseases, are prescribed in courses of 90 days duration or less.
Study base
Participants of this study were 1,128 males and 1,344 females (total 2,472) who joined the follow-up survey of the Tsuruoka Metabolomics Cohort Study between April 2015 and March 2016 and who were beneficiaries of NHI or the Medical Care System for the Advanced Elderly. Briefly, the Tsuruoka Metabolomics Cohort Study is a population-based study started in April 2012 in Tsuruoka City, Yamagata Prefecture, Japan. A total of 11,002 participants aged 35–74 years were recruited and enrolled from municipal or worksite health check-ups in the city during the baseline period from 2012 to 2014. Follow-up surveys of this original cohort are conducted periodically. Participant information, including social factors, medical history, and medications, was obtained from standardized self-administered questionnaires with a face-to-face interview during the health check-up. Other measurements (height, weight, blood pressure, and laboratory data) were also collected during the check-up. All data were recorded using anonymized participant linkers. Details have been reported previously. The study was approved by the Medical Ethics Committee of the School of Medicine, Keio University, Tokyo, Japan (Approval No 20110264). All individual participants in this study provided written informed consent.
Self-reported medication use
All participants were asked to complete a standardized self-administered questionnaire which included the items listed below. The answers were checked twice by interviewers using a face-to-face interview.
• Are you currently (at least once a week) taking any medications? (yes or no).
[1] Medication for hypertension (yes or blank).
[2] Medication for blood sugar level-lowering (diabetes) (yes or blank).
[3] Medication for cholesterol-lowering (dyslipidemia) (yes or blank).
We defined participants who answered 'yes' to the first question as self-reported medication users and those who answered 'no' as non-users. Self-reported medication users who chose "Medication for hypertension", "Medication for blood sugar level-lowering (diabetes)", or "Medication for cholesterol-lowering (dyslipidemia)" were defined as self-reported medication users for the corresponding disease.
Medication use information from medical and pharmacy health insurance claims
Regular medication users were captured by using health insurance claims from October 2014 to March 2016 provided by Tsuruoka City.
To define medication categories, we used the drug database in Japan and the Anatomical Therapeutic Chemical (ATC) codes provided by the World Health Organization. For some medications which did not have an ATC code, we assigned the closest minimum code based on medication category. We defined antihypertensive medications as medications with an ATC code starting with C02 or listed as a medication for hypertension in Japan. Medications for diabetes were medications with an ATC code starting with A10 or listed as a medication for diabetes in Japan. Medications for dyslipidemia were medications with an ATC code starting with C10 or listed as a medication for dyslipidemia in Japan. As long-term prescriptions are allowed in Japan, even if participants were not prescribed a medication during the survey month, they might still have been taking medication prescribed in a previous month. As a previous study observed that time windows shorter than 90 days are less sensitive for detecting medication use, we used two different time periods (3- and 6-month fixed time windows). The 3-month fixed time window was defined as the survey month in which participants answered the self-reported questionnaire plus the previous 2 months; the 6-month fixed time window was defined analogously as the survey month plus the previous 5 months. We then identified 'regular medication users', from an objective perspective, as participants for whom the medications were prescribed at least once during the 3- or 6-month fixed time window.
Additional covariate data of sociodemographic information
Marital status was classified as married if a participant answered 'yes' to the question 'Do you currently have a spouse? (even if not living together)'. If a participant answered 'no', they were classified as single, divorced, or widowed. If a participant's last education status was elementary school, junior high school, or high school, we classified them as having 12 or fewer years of education. If they had graduated from a technical college, junior college, university, or graduate school, we classified them as having more than 12 years of education. Job status was classified as 'currently working' if participants were not homemakers or unemployed. We defined current smokers as those who currently smoked cigarettes and current drinkers as those who consumed more than 20 g of alcohol every day. Those who exercised moderately for at least 30 minutes more than twice per week and had maintained this habit for more than 1 year were defined as regular exercisers. This information was collected at the baseline survey and updated at the follow-up survey if the status had changed.
Statistical methods
We analyzed 2,472 beneficiaries (1,128 males and 1,344 females) of NHI or the Medical Care System for the Advanced Elderly in this study because data on Employees' Health Insurance beneficiaries were not available at this time. Differences between males and females were determined using Student's t-test for continuous variables and the Chi-square test for categorical variables. We evaluated the prevalence of medication use as determined by the standardized self-administered questionnaire and by the health insurance claims separately.
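As a concrete illustration of the claims-based classification described above (ATC-code prefixes combined with a fixed time window), the minimal Python sketch below flags regular medication users from a claims extract. The column names (patient_id, atc_code, dispense_month) and the example records are assumptions for illustration only; the study's actual claims processing is not described at this level of detail.

```python
import pandas as pd

# Hypothetical claims extract; column names and records are illustrative only.
claims = pd.DataFrame({
    "patient_id":     [1, 1, 2, 3],
    "atc_code":       ["C02AB01", "A10BA02", "C10AA05", "N02BE01"],
    "dispense_month": ["2015-09", "2015-10", "2015-08", "2015-10"],
})

ATC_PREFIXES = {"hypertension": "C02", "diabetes": "A10", "dyslipidemia": "C10"}

def regular_users(claims, survey_month="2015-10", window_months=3):
    """Per category, return IDs with at least one dispensing whose ATC code starts
    with the category prefix within the fixed window ending at the survey month."""
    end = pd.Period(survey_month, freq="M")
    window = {str(end - k) for k in range(window_months)}  # survey month + previous months
    in_window = claims[claims["dispense_month"].isin(window)]
    return {
        cat: set(in_window.loc[in_window["atc_code"].str.startswith(prefix), "patient_id"])
        for cat, prefix in ATC_PREFIXES.items()
    }

print(regular_users(claims))
# {'hypertension': {1}, 'diabetes': {1}, 'dyslipidemia': {2}}
```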
To assess the validity of self-reported medication use, we used the health insurance claims as a standard. Sensitivity, specificity, and agreement were calculated with 95% confidence intervals (CIs). Sensitivity identifies the proportion of self-reported medication users among regular medication users, while specificity identifies the proportion of non-users according to the questionnaire among non-users detected by the health insurance claims. Agreement between self-reported medication use and the health insurance claims was calculated using the kappa statistic. The kappa statistic varies from 0 to 1 and is interpreted as follows: fair to poor (<0.40), moderate (0.41–0.60), substantial (0.61–0.80), and almost perfect (>0.81). Furthermore, we performed logistic regression analysis to examine potential determinants of discordance which affected sensitivity in each medication group, such as sex, age, marital status, education years, job status, smoking status, drinking status, and regular exercise habit. Odds ratios (ORs) with 95% CIs were calculated. Multivariable logistic regression was performed in each medication group, adjusting for all potential determinants mentioned above. Subgroup analyses stratified by concurrent therapeutic areas, sex, education years, and smoking status were also performed. We also performed logistic regression analysis to examine potential determinants of discordance which affected not only sensitivity but also specificity in each medication group. P < 0.05 was considered statistically significant. All statistical analyses were performed using SAS 9.4 (SAS Institute Inc., Cary, NC, USA).
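For reference, these validity measures can be computed from a 2×2 cross-tabulation of self-report against claims, with Cohen's kappa defined as κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement and p_e the agreement expected by chance. The short Python sketch below uses made-up counts purely to illustrate the calculation; it is not the study's analysis code.

```python
# Illustrative 2x2 table of self-report vs claims (counts are made up):
#                       claims: user   claims: non-user
# self-report: user         a = 380        b = 12
# self-report: non-user     c = 20         d = 1588
a, b, c, d = 380, 12, 20, 1588
n = a + b + c + d

sensitivity = a / (a + c)   # self-reported users among claims-identified regular users
specificity = d / (b + d)   # self-reported non-users among claims-identified non-users

p_o = (a + d) / n                                      # observed agreement
p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance-expected agreement
kappa = (p_o - p_e) / (1 - p_e)

print(f"sensitivity={sensitivity:.3f}  specificity={specificity:.3f}  kappa={kappa:.3f}")
# sensitivity=0.950  specificity=0.993  kappa=0.950
```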
Basic characteristics
Table shows the characteristics of participants. Mean age was 66 (standard deviation [SD], 6.9) years overall, 65 (SD, 7.5) years in males and 66 (SD, 6.5) years in females. A higher proportion of males than females were married, working, current smokers, current drinkers, or taking prescribed medications for hypertension or diabetes. The most commonly prescribed medications were antihypertensive medications. With a 3-month fixed time window, the proportion of participants who took antihypertensive medications was 39.0% (males 43.7% and females 35.0%), versus 30.8% for dyslipidemia medications (males 24.6% and females 36.0%) and 9.1% for diabetes medications (males 12.1% and females 6.5%).
Validity of self-reported medication use
Validation was performed between medication use from the self-reported questionnaire and the health insurance claims (Table ). We also conducted the same analyses stratified by sex (data not shown) and by concurrent therapeutic areas. Although there were no obvious differences in sensitivity, specificity, or kappa scores between the 3- and 6-month fixed time windows, we used the 3-month fixed time window for the following analyses as it showed slightly higher sensitivity than the 6-month window. Self-reported use of antihypertensive medications and diabetes medications predicted regular use with high sensitivity (3-month fixed time window, 0.95 for antihypertensive medications and 0.94 for diabetes medications; 6-month fixed time window, 0.94 for antihypertensive medications and 0.92 for diabetes medications). In contrast, self-reported use of dyslipidemia medications showed lower sensitivity (3-month fixed time window, 0.84; 6-month fixed time window, 0.84) than that for the other medications. Specificities were all over 0.97. Agreement for dyslipidemia medications was also lower than for the other medications, but the kappa scores were still in the almost perfect range (3-month fixed time window, 0.85; 6-month fixed time window, 0.86). Sensitivity was better among participants with one therapeutic area than among those with two or three therapeutic areas.
Determinants of discordance
Analyses of subgroups with the 3-month fixed time window stratified by sociodemographic factors including sex, age, marital status, education years, job status, smoking status, drinking status, and regular exercise habit are shown in Table . In the antihypertensive medications and the diabetes medications groups, sensitivity and specificity were all over 0.88 and kappa scores were all over 0.82 regardless of sociodemographic factors. In the antihypertensive medications group, education years were associated with sensitivity (over 12 years, 0.99; 12 or fewer years, 0.94) and the association was still observed after multivariate adjustment (OR 0.19; 95% CI, 0.05–0.81). In contrast, in the dyslipidemia medications group, sex (males, 0.71; females, 0.92), education years (over 12 years, 0.91; 12 or fewer years, 0.83), and smoking status (current smoker, 0.61; non-current smoker, 0.86) were associated with sensitivity. The associations were still observed after multivariate adjustment for sex (OR 4.15; 95% CI, 2.54–6.77), education years (OR 0.44; 95% CI, 0.23–0.85), and smoking status (OR 2.19; 95% CI, 1.09–4.38) (Table ). Next, we conducted the same subgroup analyses divided by concurrent therapeutic areas. Sex was associated with sensitivity in those with dyslipidemia and in those with hypertension and dyslipidemia.
Education years were also associated with sensitivity in those with hypertension and dyslipidemia. The associations were still observed after multivariate adjustment. We also conducted the same subgroup analyses divided by sex ( and ). In the dyslipidemia medications group among male participants, sensitivity was associated with education years (over 12 years, 0.84; 12 or fewer years, 0.68) even after multivariable analysis (OR 0.41; 95% CI, 0.18–0.93). Further analyses stratifying the same subgroup by education years and smoking status showed similar tendencies, even after multivariate adjustment. In the group with 12 or fewer years of education taking dyslipidemia medications, sex (males, 0.68; females, 0.91) and smoking status (current smoker, 0.56; non-current smoker, 0.84) were associated with sensitivity (data not shown). Furthermore, sex (males, 0.73; females, 0.92) and education years (over 12 years, 0.92; 12 or fewer years, 0.84) were associated with sensitivity in the group of non-current smokers taking dyslipidemia medications (data not shown). These associations were still observed even after multivariate adjustment ( and ). The characteristics of the concordance and discordance groups, where discordance affected not only sensitivity but also specificity, are shown in . The following determinants were associated with discordance: sex (OR 1.69; 95% CI, 1.04–2.74), age (OR 2.06; 95% CI, 1.18–3.59), and education years (OR 0.42; 95% CI, 0.20–0.88) in the antihypertensive medications group; sex (OR 3.62; 95% CI, 1.74–7.51) in the diabetes medications group; and sex (OR 2.33; 95% CI, 1.57–3.46) and age (OR 1.79; 95% CI, 1.19–2.69) in the dyslipidemia medications group (data not shown).
In this study, we found that self-reported medication use had high validity for predicting regular medication use, and that sensitivity for dyslipidemia medication use was lower than that for the other lifestyle-related diseases. Our data provide convincing evidence that self-reported medication use for lifestyle-related diseases is a valid measure to capture regular medication use in a cohort study. Moreover, potential individual determinants, such as sex, education years, and smoking status, were related to discordance in self-reported medication use for dyslipidemia.
Medication use information from a self-reported questionnaire and health insurance claims
In this study, we compared medication use from a self-reported questionnaire with health insurance claims. A previous study assessed the sensitivity of information from hospital files, structured interviews, and insurance claims data compared with medication-containing blood samples. Although that study reported no significant differences between methods, the sensitivity of information from insurance claims data was the highest (0.89 for interview and 0.93 for insurance claims data). Based on this result, we considered insurance claims data a useful tool for identifying regular medication users among the medication users measured with the self-reported questionnaire in this study.
3- and 6-month fixed time windows
No obvious differences in results were observed between the 3- and 6-month fixed time windows by sex or concurrent therapeutic areas. Medications for lifestyle-related diseases often need to be taken on a regular basis for a long time, and are often prescribed in quantities for courses of 3 months duration or less. This might have led us to recount the same participants as in the 3-month fixed time window even when we fixed the time window at 6 months. A previous population-based study in Japan validated self-reported medication use for lifestyle-related disease in 54,712 participants using a 3-month fixed time window for pharmacy health insurance claims. Their reported sensitivities for antihypertensive medications (92.4%) and dyslipidemia medications (86.2%) were similar to our present results, but their sensitivity for diabetes medications (82.6%) was lower. This discrepancy is likely due to the type of health insurance claims covered: their validation was done using health insurance claims for pharmacy only, whereas we used claims for both medicine and pharmacy, which provided more accurate results. Dyslipidemia medication use showed lower sensitivity than the other medication uses in both our present study and this previous study. The awareness level of dyslipidemia is reported to be lower than that of other lifestyle-related diseases such as hypertension. Self-recognition of health condition is also reported to affect sensitivity. To our knowledge, our present paper is one of only a few population-based validation studies of self-reported medication use which have covered all of the participants' health insurance claims.
Determinants of discordance of self-reported medication use
We found that type of medication, sex, age, education years, and smoking status were associated with the accuracy of self-reported medication use. The sensitivity among participants using medications for dyslipidemia was lower than that for the other medications.
Males who had 12 or fewer years of education and who had a current smoking habit showed lower sensitivity than those who had more than 12 years of education and those who were non-current smokers in the dyslipidemia medications group. Although a number of population-based studies have reported the validity of self-reported medication use, few studies have explored the individual determinants of discordance for self-reported medication use. A study from Scotland, which validated self-reported use of cholesterol-lowering medications and antihypertensive medications in 9,043 participants, reported predictors of discordance that affected sensitivity. The Scottish study observed that sociodemographic information, including sex, age, marital status, education years, and smoking status, did not affect discordance for cholesterol-lowering medication use, but found that female sex, younger age, and smoking were associated with increased discordance for antihypertensive medication use. The reason only our study identified sex, education years, and smoking status as determinants of discordance for dyslipidemia medication use may lie in slight differences in data collection among studies. Whereas our study collected data on dyslipidemia medications, the Scottish study collected data on cholesterol-lowering medications only, and might not have included medications for hypertriglyceridemia or hypo-HDL-cholesterolemia. Studies from Finland and Ireland have explored predictors of discordance which affected not only sensitivity but also specificity. The Finnish study validated diabetes medication use in 7,625 participants and reported that none of the sociodemographic factors was associated with discordance. The Irish study validated the use of calcium channel blockers, diabetes medications, and lipid-modifying agents in 2,621 participants and reported that older age was associated with increased discordance for the use of calcium channel blockers, the same tendency observed in the antihypertensive medications group of the Tsuruoka Metabolomics Cohort Study. The predictors of discordance might therefore differ depending on the definition of discordance. Although previous studies did not identify education status as a determinant of discordance for self-reported medication use for lifestyle-related disease, a few studies of antidepressant use reported that a lack of higher education was associated with worse recall. We assume that participants without higher education might take their medications without knowing their efficacy, owing to a lack of knowledge, a lack of interest in the treatment, or poor health awareness (such as smoking cigarettes), particularly for diseases with few or no symptoms, such as dyslipidemia.
Study strengths and weaknesses
Among its strengths, this study was conducted by linking population-based cohort data with both medical and pharmacy health insurance claims. Our use of information on prescribed medications dispensed from hospitals and pharmacies enabled us to draw accurate results. Furthermore, our detailed analyses of the factors that would affect sensitivity strongly supported the associations, especially in those taking dyslipidemia medications. Several limitations of our study also warrant mention. First, we covered only a subset of participants in this study, namely beneficiaries of NHI and the Medical Care System for the Advanced Elderly.
The selection of participants might lead to the older age demographic in this study. Further study will be required for beneficiaries of Employees’ Health Insurance, which include most participants aged younger than 65 years. Second, the health insurance claims data may be insufficient for participants who newly changed their coverage from Employees’ Health Insurance to NHI. This might have increased the number of false-positive results. Third, adherence to medication was not considered. Although we observed high sensitivity and specificity for each medication, we do not know if the participants took the medications correctly as indicated, because the prescription records provide only the fact that patients have received the medications. In this study, we could observe the proportion of those with medications, but there is a possibility that some of the participants with low adherence are included in regular medication users. Fourth, the generalizability of this study to other questionnaires might be limited as we analyzed the participants who joined the cohort study. The participants who joined a cohort study might report their medication use more accurately than those who did not. Fifth, we also conducted the validation by concurrent therapeutic areas; however, further study will be needed by increasing the number of participants. Sixth, in the analyses by the factors which would affect sensitivity, the associations were not determined enough by the response variables due to a small number of failures, especially in those with hypertension and diabetes. Further study also will be needed by increasing the number of participants in regard to this point. Finally, only medications for lifestyle-related diseases were validated. Further study will be needed with other medications. In conclusion, we found that the self-reported medication use for lifestyle-related diseases was a valid measure to capture regular medication use in a cohort study. Sensitivity for dyslipidemia medications was lower than those for the others. Dyslipidemia medication, sex, number of years of education, and smoking habit were associated with discordance which affected sensitivity in self-reporting.
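As a point of reference for the validation metrics used throughout this discussion, the short sketch below shows how sensitivity and specificity are computed from a 2 × 2 cross-classification of self-reported use against claims-confirmed use. It is illustrative only: the function name and the example counts are hypothetical and are not taken from the study data.

```python
# Illustrative sketch: sensitivity and specificity of self-reported medication use,
# using health insurance claims as the reference standard. Counts are hypothetical.

def sensitivity_specificity(tp: int, fn: int, fp: int, tn: int) -> tuple[float, float]:
    """tp: self-reported use confirmed by claims
    fn: no self-report, but claims show regular use
    fp: self-reported use not supported by claims
    tn: no self-report and no claims record"""
    sensitivity = tp / (tp + fn)   # proportion of claims-confirmed users who reported use
    specificity = tn / (tn + fp)   # proportion of non-users who correctly reported no use
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=430, fn=70, fp=25, tn=2475)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
# -> sensitivity = 0.86, specificity = 0.99
```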
Advances in Thiopurine Drug Delivery: The Current State-of-the-Art
In the 1950s, Gertrude Elion and George Hitchings developed thiopurines for the treatment of childhood leukemia, and in doing so revolutionized drug discovery. Historically, drug discovery was empirically based and often the result of a trial-and-error process. Elion and Hitchings instead rationally designed new molecules with specific molecular structures, an approach that today is called rational drug design. For this, they were awarded the Nobel Prize in Physiology or Medicine in 1988. The three thiopurines they discovered were thioguanine, mercaptopurine and azathioprine. These drugs have remained cornerstone treatments for a wide range of conditions, such as leukemia, organ rejection, inflammatory bowel disease (IBD), systemic lupus erythematosus (SLE) and other inflammatory and autoimmune diseases in general. In this context, probably the most experience with thiopurines has been gained in the management of IBD, where thiopurines have been used since the early 1960s. Thiopurines are prodrugs that undergo extensive metabolism before they are converted into the active metabolites, 6-thioguanine nucleotides (6-TGN). This pool of 6-TGNs consists of both thioguanine ribonucleotides and deoxyribonucleotides. It has long been known that 6-TGNs, especially when induced by higher dosages of thiopurines, can be incorporated into RNA and DNA as fraudulent bases through competing pathways. Incorporation into DNA halts replication through single-strand breaks, crosslinking and sister chromatid exchange. A more likely target of lower-dose thiopurines is Ras-related C3 botulinum toxin substrate 1 (Rac1), acted on via the ribonucleotide 6-thioguanine-triphosphate (6-TGTP). This 6-TGTP binds to Rac1 instead of GTP, and the resulting complex induces apoptosis of activated T lymphocytes. Owing to their chemical properties, thiopurines are known for low solubility in water and variable bioavailability. In the last decade, various drug delivery formulations and approaches have therefore been tested to improve the delivery of thiopurines. This review provides an overview of novel drug delivery strategies for thiopurines, covering modified-release formulations, liposomal delivery systems and nano-formulations.
A literature search on PubMed was performed using the query mentioned in the appendix. The most recent search was performed up to August 2021. Furthermore, references were searched by hand for additional studies. Studies were included that investigated a drug delivery system for thiopurines (azathioprine, mercaptopurine, thioguanine or thiamiprine). Both clinical and non-clinical studies were included; reviews were excluded.
The thiopurines are structurally alike. Each consists of a thiol group attached to a purine, i.e., a pyrimidine ring fused to an imidazole ring. Azathioprine contains an additional imidazole bonded to the sulfur group of mercaptopurine. In Fig. , the structural formulas with relevant chemical properties are shown for azathioprine, mercaptopurine, thioguanine and thiamiprine. Thiopurines can be subdivided into two classes (imidazole and non-imidazole thiopurines) and two groups (the mercaptopurine and thioguanine groups), and it is essential to make these distinctions. Thiamiprine, an often forgotten thiopurine once deemed an 'impurity' in the synthesis of azathioprine, is a prodrug form of thioguanine with an imidazole group attached. Although the literature on this drug is limited, preclinical studies have shown that it has stronger immunomodulating effects than azathioprine, which is consistent with it being a prodrug of thioguanine. The rationale behind the 1-methyl-4-nitro-imidazolyl derivative in azathioprine and thiamiprine is that, because of the ortho-nitro substituent, these compounds are vulnerable to nucleophilic attack at the bond between the purine sulfur and the methyl-nitroimidazole ring. This leads to cleavage of the purine from the imidazole group. It is thought that azathioprine partially evades first-pass metabolism because it is converted to mercaptopurine only after hepatic metabolism. In their early work, Elion et al. discovered that azathioprine and thiamiprine were as effective as mercaptopurine and thioguanine, but less toxic. Furthermore, contemporary research has shown that a designer thiopurine analogue was less hepato- and myelotoxic and more effective in reducing inflammation than mercaptopurine. Thus, thiopurines remain interesting compounds to be tested clinically for a wide range of diseases.

The chemical properties of thiopurines depend on essential parameters such as the partition coefficient (log P) and water solubility. Marvin was used for the prediction and calculation of the chemical properties of the different thiopurines: Marvin 20.12, 2020-04-27, ChemAxon ( www.chemaxon.com ). Water solubility is influenced by the acid dissociation constant (pKa). Thioguanine has pKa values of 1.2 and 10.0, while mercaptopurine has pKa values of 3.0 and 11.1. The clog P values (at pH 7.4) of thioguanine and mercaptopurine are −0.35 and −0.12, respectively, and their water solubilities (at pH 7.4) are 0.05 and 0.09 mg/ml. Hence, according to the biopharmaceutical classification system (BCS), in which a drug is considered highly soluble only if its highest strength dissolves in ≤ 250 ml of aqueous media over the pH range 1.0–6.8 at 37 ± 1 °C, the solubility of both drugs is poor. The bioavailability of thioguanine and mercaptopurine ranges from 5 to 42%. Thus, based on their solubility and bioavailability, thioguanine and mercaptopurine belong to BCS class IV (poor solubility and poor permeability). The poor water solubility of thiopurines remains a challenge for oral drug delivery. However, the magnitude of absorption is primarily determined by the extent to which a drug is present in its unionized form at the site of absorption. This explains why thiopurines can still be absorbed even though their solubility in the gastrointestinal tract is low. More specifically, the absorption of thiopurines is controlled by the dissolution rate and the extent to which they dissolve in the gastrointestinal tract.
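To make the BCS solubility criterion above concrete, the sketch below computes the volume of aqueous medium needed to dissolve a full dose at the stated solubilities and compares it with the 250 ml threshold. The solubilities come from the text, but the "highest strength" values used here (40 mg thioguanine, 50 mg mercaptopurine) are assumptions made for illustration, not figures reported in this review.

```python
# Illustrative BCS solubility check. Solubilities (mg/ml at pH 7.4) are from the text
# above; the 'highest strength' values are assumed purely for the sake of the example.

SOLUBILITY_MG_PER_ML = {"thioguanine": 0.05, "mercaptopurine": 0.09}
ASSUMED_HIGHEST_STRENGTH_MG = {"thioguanine": 40, "mercaptopurine": 50}  # hypothetical
BCS_VOLUME_THRESHOLD_ML = 250  # 'highly soluble' if the dose dissolves in <= 250 ml

for drug, solubility in SOLUBILITY_MG_PER_ML.items():
    dose = ASSUMED_HIGHEST_STRENGTH_MG[drug]
    volume_needed = dose / solubility  # ml of medium required to dissolve the dose
    highly_soluble = volume_needed <= BCS_VOLUME_THRESHOLD_ML
    print(f"{drug}: {volume_needed:.0f} ml needed -> highly soluble: {highly_soluble}")
# thioguanine: 800 ml needed -> highly soluble: False
# mercaptopurine: 556 ml needed -> highly soluble: False
```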
As the pH in the stomach is low (~ pH 1–2), the entire dose of thioguanine or mercaptopurine can be expected to dissolve rapidly in the gastric fluids. This is because, at this pH, most thioguanine and mercaptopurine species are present in their protonated forms, which markedly increases their solubility (~ 1 to 100 mg/ml at pH 1–2). Further along the gastrointestinal tract, however, the pH rises, causing a shift from protonated to unionized thiopurine species in the small and large intestines, where pH values are > 5.0. According to the pH-partition hypothesis of drug absorption, the lipid bilayer of the gastrointestinal epithelium forms an impermeable barrier to protonated drugs, while the unionized forms are able to pass freely.
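A simple Henderson–Hasselbalch calculation illustrates the shift described above. The sketch assumes that the lower pKa values quoted earlier (1.2 for thioguanine, 3.0 for mercaptopurine) correspond to protonation of a basic ring nitrogen, so the fraction in the protonated (better-soluble) form at a given pH is 1 / (1 + 10^(pH − pKa)); this is an illustration of the principle, not a full speciation model.

```python
# Fraction of a basic site that is protonated at a given pH (Henderson-Hasselbalch).
# Assumes the lower pKa from the text is the conjugate-acid pKa of a ring nitrogen.

def fraction_protonated(pka: float, ph: float) -> float:
    return 1.0 / (1.0 + 10 ** (ph - pka))

for drug, pka in {"thioguanine": 1.2, "mercaptopurine": 3.0}.items():
    for ph in (1.0, 5.0, 7.4):  # stomach, proximal intestine, distal intestine (approx.)
        print(f"{drug}: pH {ph} -> {fraction_protonated(pka, ph):.2%} protonated")
# Mercaptopurine, for example, is ~99% protonated at pH 1.0 but essentially unionized
# at pH 7.4, consistent with high gastric solubility and mostly unionized drug downstream.
```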
Thiopurine methyltransferase (TPMT) and Nudix hydrolase 15 (NUDT15) are enzymes involved in thiopurine metabolism, and genetic polymorphisms in these enzymes are associated with thiopurine-related toxicities. Pharmacogenetic (PGx) testing for these polymorphisms has been implemented in clinical practice, especially in leukemia and IBD. Moreover, novel data on thioguanine in the treatment of childhood leukemia indicate that PGx testing is helpful in reducing thioguanine-related hepatic sinusoidal obstruction syndrome. Furthermore, it has also been shown that TPMT geno-/phenotype is significantly associated with 6-TGN levels in thioguanine-treated IBD patients, but that 6-TGN levels do not correlate with laboratory parameters. The use of PGx is clinically important for thiopurine therapy, and it can be combined with novel drug delivery formulations to reduce toxicity.
In medical practice, thiopurines are generally administered orally; however, intravenous and rectal delivery have also been described. The pharmacokinetics of thiopurines have been studied extensively, especially in patients with leukemia. Different dosages are commonly used for the three thiopurines. Oral dosages of thioguanine range between 5 and 40 mg/day for IBD, while dosages may be up to three times higher for leukemia. The plasma half-life of thioguanine follows a bi-exponential profile, with an initial t1/2 of 3.0 h and a terminal t1/2 of 5.9 h. The 6-TGN half-life after thioguanine administration varies from 4.4 to 9 days. The daily dosage of mercaptopurine ranges from 1 to 2.5 mg/kg; the plasma t1/2 of mercaptopurine is 1.1 h, while the 6-TGN t1/2 is approximately 5 days. The daily azathioprine dosage ranges from 1 to 5 mg/kg, with a serum t1/2 of azathioprine of 0.2–0.5 h. A meta-analysis showed that IBD patients with 6-TGN levels above cutoffs of 235 pmol/8 × 10^8 RBC and 250 pmol/8 × 10^8 RBC had odds ratios for remission of 2.66 [95% CI 1.94–3.66] and 4.71 [95% CI 2.31–9.62], respectively, compared with patients below these cutoffs. Another meta-analysis showed that higher 6-TGN levels were present in patients with leukopenia (mean difference 127.1 pmol/8 × 10^8 RBC) and gastrointestinal intolerance (201.5 pmol/8 × 10^8 RBC). Furthermore, 6-methylmercaptopurine ribonucleotides (6-MMPR) were significantly associated with hepatotoxicity (mean difference 3241.2 pmol/8 × 10^8 RBC; OR 4.28 [95% CI 3.20–5.71]). Therefore, achieving adequate 6-TGN levels is clinically relevant for IBD patients. As the bioavailability of thiopurines is subject to high variability, drug delivery systems may aid in achieving more stable 6-TGN levels, which can in turn improve efficacy and reduce toxicity.

Effects of Food Intake on Bioavailability of Thiopurines
The effect of food has not been clarified sufficiently for thiopurine prodrugs, but overall it appears that steady-state 6-TGN levels are unaffected by concomitant ingestion of food. Concomitant food intake with oral thioguanine administration led to significantly decreased maximal plasma concentrations (Cmax) and decreased area-under-the-curve (AUC) values; why this did not affect 6-TGN values 4 weeks after administration, compared with fasting patients, still requires further explanation. The concomitant consumption of unprocessed cow's milk with mercaptopurine might also reduce the bioavailability of mercaptopurine, because unprocessed cow's milk contains high levels of xanthine oxidase (XO). Based on the chemical properties of the thiopurines under physiological conditions, it is to be expected that food and drugs affecting the gastric pH might affect the gastric solubility of thiopurines.

Drug-induced Effects on Thiopurine Metabolism
The pharmacokinetics of thiopurines, not unexpectedly, can also be influenced by concomitant therapy with other drugs. The first step in the metabolism of mercaptopurine and its prodrug azathioprine is the conversion into 6-thioinosine monophosphate (6-TIMP) by hypoxanthine-guanine phosphoribosyl transferase (HPRT). 6-TIMP is further metabolized by inosine monophosphate dehydrogenase (IMPDH), and ultimately the pharmacologically active 6-TGNs are formed. Alternatively, mercaptopurine can be metabolized by TPMT into the 6-methylmercaptopurine (6-MMP) pathway, which is associated with hepatotoxicity, or be catabolized by XO into 6-thiouric acid (6-TUA).
The addition of allopurinol, a non-selective XO inhibitor, to azathioprine or mercaptopurine therapy leads to a rise in 6-TGN and a concomitant reduction in 6-MMPs. Although the biological mechanism of this switch in preferential metabolism has not been completely elucidated, it is suggested that TPMT is directly inhibited by an increase in thioxanthine, which is a consequence of the inhibition of xanthine dehydrogenase by oxypurinol, the active metabolite of allopurinol. While allopurinol only inhibits the reduced forms of XO, febuxostat, a newer selective non-purine-based XO inhibitor, inhibits both the oxidized and reduced forms and seems to have a greater potency to inhibit XO. It seems likely that co-administration of XO inhibitors and thiopurines causes a similar shift in thiopurine metabolism, and a case series demonstrated that concomitant use is indeed associated with 6-TGN-induced myelosuppressive adverse events.

Sulfasalazine and other 5-aminosalicylic acid (5-ASA) derivatives, used in gram doses to treat IBD, have been shown to be potent in vitro inhibitors of recombinant human TPMT. In vivo studies demonstrated increased 6-TGN levels during concurrent therapy, especially with time-dependent 5-ASA formulations, and there has now been at least one report of a potentially serious drug interaction when these agents were administered to a patient who was also being treated with a standard dose of a thiopurine. However, a randomized controlled trial in which CD patients were assigned to post-surgical treatment with azathioprine or mesalazine found that TPMT activity did not differ between the two patient groups at any study visit during 1 year of follow-up. Therefore, the mechanism by which thiopurines and 5-ASA derivatives interact is not completely understood; still, physicians should be cautious when combining these drugs.

Furosemide and, to a lesser extent, bendroflumethiazide and trichlormethiazide are in vitro and ex vivo inhibitors of TPMT activity in red blood cells, although contradictory results, with elevated TPMT activities in subjects on diuretics, were reported in a population study. The same study suggested that the use of non-steroidal anti-inflammatory drugs (NSAIDs) and antihypertensives is associated with lower TPMT activity in red blood cells. IMPDH is a target for drug interaction when azathioprine and ribavirin are used concomitantly, because the inhibition of IMPDH by ribavirin leads to the conversion of 6-TIMP into 6-methylthioinosine monophosphate (6-MTIMP) via TPMT, and accumulation of 6-MTIMP is associated with myelotoxicity. Unsurprisingly, a modest decrease in 6-TGN levels was observed during concomitant therapy, moderating azathioprine efficacy.

Effects of the Microbiome on Thiopurine Metabolism
Besides systemic conversion, thiopurines, and especially thioguanine, can also be subject to bacterial metabolism. It was demonstrated in vitro that representative gut bacteria were able to generate the pharmacologically active 6-TGN after incubation with thioguanine. Dextran sulfate sodium (DSS)-treated HPRT-deficient mice had detectable fecal 6-TGN, suggesting bacterial conversion of thioguanine into the active metabolites, given that host cells of HPRT-deficient mice cannot generate 6-TGN. Consistent with this, in vivo studies demonstrated an improvement of DSS-induced colitis in HPRT-deficient mice with oral thioguanine treatment.
When Winnie mice, which develop spontaneous colitis due to a variant polymorphism in Muc2, were treated with rectal thioguanine, a rapid and significant improvement of distal colitis was observed. Following these observations of microbial conversion of thioguanine and its rectal benefit in Winnie mice with intact HPRT activity, a small number of IBD patients have been treated with rectally administered thioguanine. Systemic 6-TGN levels were low, and promising treatment responses were observed in an uncontrolled series. Therefore, colonic delivery, via enemas, suppositories or oral tablets designed for that purpose, may provide additional benefit via microbial conversion of thioguanine to the active drug and by actions at the level of the inflamed epithelium.
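Before moving on to delivery strategies, the thioguanine plasma half-lives quoted at the start of this section can be made concrete with a short numerical sketch of bi-exponential decay. The 60/40 split between the two phases is a hypothetical choice made purely for illustration and is not taken from the cited pharmacokinetic studies.

```python
# Illustrative bi-exponential plasma decay, C(t) = A*exp(-alpha*t) + B*exp(-beta*t),
# using the half-lives from the text (initial 3.0 h, terminal 5.9 h).
# The amplitude split (A=0.6, B=0.4 of the initial concentration) is hypothetical.
import math

T_HALF_INITIAL_H, T_HALF_TERMINAL_H = 3.0, 5.9
alpha = math.log(2) / T_HALF_INITIAL_H   # initial-phase rate constant (1/h)
beta = math.log(2) / T_HALF_TERMINAL_H   # terminal-phase rate constant (1/h)

def fraction_remaining(t_hours: float, a: float = 0.6, b: float = 0.4) -> float:
    return a * math.exp(-alpha * t_hours) + b * math.exp(-beta * t_hours)

for t in (0, 6, 12, 24):
    print(f"t = {t:2d} h -> {fraction_remaining(t):.1%} of the initial plasma level")
# By 24 h only a few percent remains in plasma, whereas the 6-TGN metabolites,
# with half-lives of several days, persist far longer.
```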
There are various strategies to adjust or improve thiopurine drug delivery, depending on the specific aim. These aims include improved local delivery (e.g., increased intestinal drug exposure), reduced systemic exposure or a reduced frequency of drug intake, which may translate into improved efficacy, reduced toxicity and/or better drug compliance. In this review, various strategies are discussed, such as physical-chemical modifications, liposomes, polymer-based approaches, nanoparticles, controlled-release formulations and other strategies. Studies that have investigated thiopurine drug delivery strategies are summarized in Table .

Physical-chemical Modifications
By modifying the physical or chemical properties of thiopurines, higher dissolution rates might be achieved, which might improve their bioavailability. Yang et al. studied an amorphous complex of bismuth(III) bonded to three mercaptopurine molecules ([Bi(mercaptopurine)3(NO3)2]NO3). The solubility of the complex was increased compared with conventional mercaptopurine (1.2 vs 0.14 mg/ml), and in vitro the complex showed stronger inhibitory effects on lung cancer cells than conventional mercaptopurine. Another Chinese group reported improved dissolution and bioavailability of mercaptopurine co-crystallized with isonicotinamide, a compound that has the capability of inducing apoptosis in leukemia cell lines/models. Other studies have also reported the synthesis and structure determination of co-crystals of mercaptopurine. Thus, physicochemical modifications that increase solubility might be applied to advantage, depending on where and how quickly the drug should be released for clinical ends.

Nanomedicine Approaches
Another promising strategy for thiopurine delivery is nano-based drug delivery, a field that has grown with the rise of nanotechnology. Nanoparticles are solid, colloidal particles that range from 10 to 1000 nm in size. Novel nano-based formulations have been developed and tested for thiopurines, including liposomal delivery, micelles, microspheres and metallic and polymer-based nanoparticles (see Fig. ).

Liposomal Drug Delivery
The discovery of liposomes, which are primarily composed of phospholipids, is attributed to Alec Bangham in 1961. Liposomes are closed spherical vesicles containing an aqueous core surrounded by mono- or bilayer membranes alternating with aqueous compartments. Liposomes can be formulated in sizes ranging from 30 nm to several micrometers in diameter; important properties are their size, composition, porosity and degradability. Liposomes are biocompatible and are generally considered safe to use. Furthermore, liposomes can be conjugated with various molecules that may increase specific targeting to the desired site of action. The liposomal delivery of mercaptopurine was already being studied in the mid-1970s, but the results of these earlier studies were discouraging because of low encapsulation rates. More recently, liposomal drug delivery has been studied for azathioprine, thioguanine and mercaptopurine. Taneja et al. studied conventional and stealth liposomes in vitro and in albino rats. They found drug release rates of approximately 17% after 6 h for the liposomal formulations, whereas free mercaptopurine had a drug release rate of 95% after 4 h.
Furthermore, they observed prolonged half-lives and increased AUC values for the liposomal formulations compared with free mercaptopurine. The most recently published study (2007), by Umrethia et al., investigated the liposomal delivery of mercaptopurine using conventional and stealth liposomes. The stealth liposomes exhibited higher encapsulation rates (E 24 94 vs 33%) and higher AUC values (42 vs 25 µg h/ml) than conventional liposomes in a mouse model. The liposomal formulations showed favorable pharmacokinetics (higher AUC values, lower Cmax values) compared with free mercaptopurine, and they produced significantly less exposure of various tissues, such as the liver, kidney, heart, lungs and spleen, than free mercaptopurine. The stealth liposomes had serum biochemical values similar to control, while free mercaptopurine and the conventional liposomes had significantly increased serum biochemical values compared with control. This study underlined that oral stealth liposomes could achieve higher bioavailability while potentially decreasing systemic side effects. Liposomal delivery seems to be especially useful for systemic diseases such as leukemia.

Polymer-based Approaches
Polymers are substances composed of macromolecules with many repeating subunits, which gives them the unique ability to be chemically modified to yield specific properties; a further advantage is that they can be biodegradable and biocompatible. The most studied polymer for thiopurine delivery is chitosan. Thiopurine-loaded chitosan nanoparticles have been studied preclinically for thioguanine, azathioprine and mercaptopurine. Chitosan nanoparticles display a pH-dependent drug release profile owing to the solubility of chitosan. Other conjugated polymeric approaches have also been described, including thioguanine-dialdehyde sodium alginate nanoparticles, thioguanine-poly(lactic-co-glycolic acid) (PLGA) nanoparticles, azathioprine-gelatin nanoparticles and glutathione-sensitive hyaluronic acid-mercaptopurine nanoparticles. Govindappa et al. studied the toxicity of mercaptopurine-conjugated chitosan nanoparticles in an animal model. Both mercaptopurine and the mercaptopurine nanoparticles were categorized as category 4 (> 300–2000 mg/kg bw) according to the Globally Harmonized Classification System. Furthermore, low dosages (15 mg/kg) of mercaptopurine and mercaptopurine nanoparticles did not lead to signs of myelotoxicity or hepatotoxicity, whereas high dosages (50 mg/kg) led to a statistically significant reduction of hematological parameters compared with saline, low-dose and mid-dose (30 mg/kg) treatment, and to significantly increased biochemical parameters. These data suggest that the mercaptopurine nanoparticles had a favorable toxicity profile compared with conventional mercaptopurine formulations. Chatterjee et al. investigated thioguanine-conjugated PLGA nanoparticles and obtained an encapsulation rate of 97% with a sustained drug release of 60–65% over the first 30 days. The nanoparticles exhibited cytotoxic properties in HeLa cells within 48 h of treatment, mediated by intracellular uptake of the nanoparticles. The high encapsulation rates and in vitro efficacy seem promising, but the polymer-based approaches have yet to reach the stage of clinical development.

Metallic Nanoparticles
Metallic nanoparticles are nanosized metals with a size ranging from 10 to 100 nm.
These metallic nanoparticles have unique features because they can be synthesized and modified in ways that allow them to bind ligands, antibodies and drugs, and their large surface-to-volume ratio allows them to bind many molecules. A few studies have been performed with thiopurine-coated metallic nanoparticles. Thiopurine-gold nanoparticles have been studied for thioguanine and mercaptopurine therapies. The multivalent and highly adjustable surface architecture of gold nanoparticles offers the opportunity to incorporate multiple drugs on the surface of a single nanoparticle. Podsiadlo et al. found that mercaptopurine-gold nanoparticles were more effective in inhibiting human chronic myeloid leukemia cells than mercaptopurine alone. Aghevlian et al. studied the effects of thioguanine-gold nanoparticles against breast cancer cells and found significantly stronger inhibition of MCF-7 cancer cells by the thioguanine-gold nanoparticles than by free thioguanine at higher concentrations (6.2 µM). Furthermore, one study characterized mercaptopurine-coated magnetite nanoparticles as a controlled-release delivery system in vitro; release rates of 93% and 51% were obtained at pH values of 4.8 and 7.4, respectively, and the formulation showed sustained-release kinetics, which may reduce systemic side effects. One disadvantage of metallic nanoparticles is that they might cause toxicity in the long term because of their size, as long-term exposure to metallic nanoparticles may affect cellular metabolism and energy homeostasis.

Controlled-Release Drug Formulations
The majority of thiopurines are administered as tablets via the oral route; the tablets used provide 'immediate-release' or 'conventional' drug delivery. There are circumstances in which immediate release is not desirable, and manipulation of the release profile of the tablets is then required. This is especially relevant in IBD, where inflammation is mainly located in the distal ileum and colon. Modified-release tablets refer to modification of drug release from a dosage form in order to change the release rate or the localization of release within the gastrointestinal tract. Attempts to develop modified-release or controlled-release formulations have been made in the past. Israeli et al. performed a phase II clinical trial of non-absorbable delayed-release tablets of mercaptopurine for Crohn's disease. For conventional oral mercaptopurine tablets, they found Cmax, Tmax and AUC values of 82.1 ng/ml (SD 28.7), 1.9 h (SD 1.1) and 216.1 ng·h/ml (SD 73.8), while for the delayed-release tablets these values were 6.1 ng/ml, 9 h and 10.2 ng·h/ml. Only one tablet formulation led to systemic release, while the other did not show any absorption at all. Thus, the delayed-release tablets demonstrated significantly lower or no systemic uptake, suggestive of a local drug effect. Subsequently, this group performed a multi-center, double-blinded, double-dummy, two-arm phase II randomized non-inferiority trial in patients with active Crohn's disease. The delayed-release tablets were associated with significantly fewer adverse events than the conventional tablets (67.5% vs 95.8%, P = 0.0079), and higher clinical response rates after 8 weeks were obtained for the delayed-release versus conventional tablets (48.3% vs 21.4%, P = 0.01). They concluded that the delayed-release tablets were non-inferior to conventional mercaptopurine tablets.
Their formulation was patented for the treatment of Crohn's disease (WO2015168448A1). Such a delayed-release formulation might also be of interest for thioguanine, which has physical-chemical properties similar to those of mercaptopurine and has also been shown to be safe and effective in Crohn's disease. Recently, examples have been described of extended-release thioguanine with an enteric coating to prevent gastric dissolution and to target the distal ileum and colon, the main sites of inflammation in IBD (WO2017054042A1).
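The Cmax and AUC figures quoted above for conventional versus delayed-release mercaptopurine can be made more tangible with a short sketch of how an AUC is obtained from a concentration-time profile by the trapezoidal rule. The sampling times and concentrations below are hypothetical and are not the data of the cited trial.

```python
# Trapezoidal-rule AUC from a plasma concentration-time profile.
# Times (h) and concentrations (ng/ml) are hypothetical illustration values.

def auc_trapezoid(times_h, conc_ng_ml):
    """Area under the concentration-time curve (ng*h/ml) by linear trapezoids."""
    auc = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        auc += 0.5 * (conc_ng_ml[i] + conc_ng_ml[i - 1]) * dt
    return auc

times = [0, 0.5, 1, 2, 4, 6, 8]                 # sampling times after dosing (h)
conventional = [0, 45, 80, 60, 25, 10, 4]       # rapid absorption, high peak
delayed_release = [0, 0, 1, 3, 5, 4, 2]         # little systemic exposure

print(f"conventional: AUC ~ {auc_trapezoid(times, conventional):.0f} ng*h/ml, "
      f"Cmax = {max(conventional)} ng/ml")
print(f"delayed-release: AUC ~ {auc_trapezoid(times, delayed_release):.0f} ng*h/ml, "
      f"Cmax = {max(delayed_release)} ng/ml")
```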
By modifying physical or chemical properties of thiopurines, higher dissolution rates might be achieved, which might improve the bioavailability of thiopurines. Yang et al. studied an amorphous complex of bismuth (III) bonded to three mercaptopurine molecules (Bi(mercaptopurine) 3 (NO 3 ) 2 ]NO 3 ). The solubility of the complex was increased compared to conventional mercaptopurine (1.2 vs 0.14 mg/ml). In vitro, the complex showed strong inhibitory effects compared to conventional mercaptopurine on lung cancer cells. Another Chinese group reported improved dissolution and bioavailability rates of co-crystalized mercaptopurine with isonicotinamide, a compound that has as the capability of inducing apoptosis in leukemia cell lines/models. Other studies also reported on the synthesis and structure determination of co-crystals of mercaptopurine . Thus, physicochemical modifications that increase solubility might be applied to advantage depending on where and how quickly the drug should be released for clinical ends.
Another promising strategy for thiopurine distribution is nano-based drug delivery. This field within drug delivery research has grown since the rise of nanotechnology. Nanoparticles are solid, colloidal particles that range from 10 to 1000 nm in size. Novel nano-based formulations have been developed and tested for thiopurines. These approaches include liposomal delivery, micelles, microspheres and metallic and polymeric based nanoparticles (see Fig. ) . Liposomal Drug Delivery The discovery of liposomes, which are primarily composed of phospholipids, was attributed to Alec Bangham in 1961 . Liposomes are closed spherical vesicles containing an aqueous core surrounded by mono- or bilayer membranes alternating with aqueous compartments. Liposomes can be formulated in sizes ranging from 30 nm to several micrometers in diameter. Important chemical properties are the size, composition, porosity and degradability of the liposomes . Liposomes are biocompatible and are generally considered safe to use . Furthermore, liposomes can be conjugated with various molecules that may increase specific targeting to the desired action site . The liposomal delivery of mercaptopurine had already been studied in the mid-1970s. The results of these earlier studies were discouraging because of low encapsulation rates . More recently, liposomal drug delivery has been studied for azathioprine , thioguanine and mercaptopurine . Taneja et al. studied conventional and stealth liposomes in vitro and in albino rats. They found drug release rates after 6 h of the liposomal formulations of approximately 17%, while free mercaptopurine had a drug release rate of 95% after 4 h. Furthermore, they observed prolonged half-lives and increased AUC values for the liposomal formulations compared to free mercaptopurine. The most recent published study (2007) by Umrethia et al. investigated the liposomal delivery of mercaptopurine using conventional and stealth liposomes. The stealth liposomes exhibited higher encapsulation rates ( E 24 94 vs 33 %) and higher AUC values (42 vs 25 µg h/ml) compared to conventional liposomes in a mouse model. The liposomal formulations showed favorable pharmacokinetics (higher AUC values, lower C max values) compared to free mercaptopurine. Furthermore, they reported that the liposomal formulations had significantly less systemic exposure to various tissues such as the liver, kidney, heart, lungs and spleen compared to free mercaptopurine. The stealth liposomes had similar serum biochemical values compared to control, while the free mercaptopurine and conventional liposomes had significantly increased serum biochemical values compared to control. This study underlined that oral stealth liposomes could achieve higher bioavailability while potentially decreasing systemic side effects. Liposomal delivery seems to be especially useful for systemic diseases such as leukemia . Polymer-based Approaches Polymers are substances composed of macromolecules with many repeating subunits, which gives polymers the unique ability to be chemically modified to yield specific properties. An advantage of polymers is that they can be biodegradable and biocompatible. The most studied polymer for thiopurine delivery is chitosan. Thiopurine-loaded chitosan nanoparticles have been studied preclinically for thioguanine, azathioprine and mercaptopurine . Chitosan nanoparticles display a pH-dependent drug release profile due to the solubility of chitosan . Other conjugated polymeric approaches have been described. 
These include thioguanine-dialdehyde sodium alginate nanoparticles , thioguanine-poly-lactic-co-glycolic acid (PLGA) nanoparticles , azathioprine-gelatin nanoparticles and glutathione-sensitive hyaluronic acid-mercaptopurine nanoparticles . Govindappa et al. studied the toxicity of mercaptopurine-conjugated chitosan nanoparticles in an animal model. Both mercaptopurine and the mercaptopurine nanoparticles were categorized as category 4 (> 300–2000 mg/kg bw) according to the Globally Harmonized Classification System. Furthermore, usage of low dosages (15 mg/kg) of mercaptopurine and mercaptopurine nanoparticles did not lead to signs of myelotoxicity and hepatotoxicity. High dosages (50 mg/kg) led to a statistically significant reduction of hematological parameters compared to saline, low and mid (30 mg/kg) dose treatment, whereas biochemical parameters were significantly increased. These data suggested that mercaptopurine nanoparticles had a favorable toxicity profile compared to normal mercaptopurine drug formulations. Chatterjee et al. investigated thioguanine conjugated PLGA nanoparticles. They obtained an encapsulation rate of 97% and a sustained drug release rate of 60–65% in the first 30 days. The nanoparticles exhibited cytotoxic properties in HeLa cells within 48 h of treatment, which was mediated by intracellular uptake of the nanoparticles. The high encapsulation rates and in vitro efficacy seem promising, but the polymer-based approaches have yet to reach the stage of clinical development. Metallic Nanoparticles Metallic nanoparticles are nanosized metals with a size ranging from 10 to 100 nm. These metallic nanoparticles have unique features because they can be synthesized and modified in a way that allows them to bind with ligand, antibodies and drugs. The large surface-to-volume ratio of metallic nanoparticles allows them to bind many molecules . A few studies have been performed with thiopurine-coated metallic nanoparticles. Thiopurine gold nanoparticles have been studied for thioguanine and mercaptopurine therapies . The multivalent and highly adjustable surface architecture of gold nanoparticles offers the opportunity to incorporate multiple drugs on the surface of a single nanoparticle . Podsiadlo et al. found that mercaptopurine-gold nanoparticles were more effective in inhibiting human chronic myeloid leukemia cells than mercaptopurine alone. Aghevlian et al. studied the effects of thioguanine-gold nanoparticles against breast cancer cells. They found significantly stronger inhibition of MCF-7 cancers by the thioguanine-gold nanoparticles compared to free thioguanine at higher concentrations (6.2 µM). Furthermore, one study characterized mercaptopurine-coated magnetite nanoparticles as a controlled release delivery system in vitro . Release rates of 93% and 51% were obtained at pH values of 4.8 and 7.4, respectively. The nanoparticle formulation showed sustained release kinetics, which may reduce systemic side effects. One disadvantage of metallic nanoparticles is that they might cause toxicity in the long term because of their size, as long-term exposure to metallic nanoparticles may affect cellular metabolism and energy homeostasis .
The discovery of liposomes, which are primarily composed of phospholipids, was attributed to Alec Bangham in 1961 . Liposomes are closed spherical vesicles containing an aqueous core surrounded by mono- or bilayer membranes alternating with aqueous compartments. Liposomes can be formulated in sizes ranging from 30 nm to several micrometers in diameter. Important chemical properties are the size, composition, porosity and degradability of the liposomes . Liposomes are biocompatible and are generally considered safe to use . Furthermore, liposomes can be conjugated with various molecules that may increase specific targeting to the desired action site . The liposomal delivery of mercaptopurine had already been studied in the mid-1970s. The results of these earlier studies were discouraging because of low encapsulation rates . More recently, liposomal drug delivery has been studied for azathioprine , thioguanine and mercaptopurine . Taneja et al. studied conventional and stealth liposomes in vitro and in albino rats. They found drug release rates after 6 h of the liposomal formulations of approximately 17%, while free mercaptopurine had a drug release rate of 95% after 4 h. Furthermore, they observed prolonged half-lives and increased AUC values for the liposomal formulations compared to free mercaptopurine. The most recent published study (2007) by Umrethia et al. investigated the liposomal delivery of mercaptopurine using conventional and stealth liposomes. The stealth liposomes exhibited higher encapsulation rates ( E 24 94 vs 33 %) and higher AUC values (42 vs 25 µg h/ml) compared to conventional liposomes in a mouse model. The liposomal formulations showed favorable pharmacokinetics (higher AUC values, lower C max values) compared to free mercaptopurine. Furthermore, they reported that the liposomal formulations had significantly less systemic exposure to various tissues such as the liver, kidney, heart, lungs and spleen compared to free mercaptopurine. The stealth liposomes had similar serum biochemical values compared to control, while the free mercaptopurine and conventional liposomes had significantly increased serum biochemical values compared to control. This study underlined that oral stealth liposomes could achieve higher bioavailability while potentially decreasing systemic side effects. Liposomal delivery seems to be especially useful for systemic diseases such as leukemia .
Polymers are substances composed of macromolecules with many repeating subunits, which gives polymers the unique ability to be chemically modified to yield specific properties. An advantage of polymers is that they can be biodegradable and biocompatible. The most studied polymer for thiopurine delivery is chitosan. Thiopurine-loaded chitosan nanoparticles have been studied preclinically for thioguanine, azathioprine and mercaptopurine . Chitosan nanoparticles display a pH-dependent drug release profile due to the solubility of chitosan . Other conjugated polymeric approaches have been described. These include thioguanine-dialdehyde sodium alginate nanoparticles , thioguanine-poly-lactic-co-glycolic acid (PLGA) nanoparticles , azathioprine-gelatin nanoparticles and glutathione-sensitive hyaluronic acid-mercaptopurine nanoparticles . Govindappa et al. studied the toxicity of mercaptopurine-conjugated chitosan nanoparticles in an animal model. Both mercaptopurine and the mercaptopurine nanoparticles were categorized as category 4 (> 300–2000 mg/kg bw) according to the Globally Harmonized Classification System. Furthermore, usage of low dosages (15 mg/kg) of mercaptopurine and mercaptopurine nanoparticles did not lead to signs of myelotoxicity and hepatotoxicity. High dosages (50 mg/kg) led to a statistically significant reduction of hematological parameters compared to saline, low and mid (30 mg/kg) dose treatment, whereas biochemical parameters were significantly increased. These data suggested that mercaptopurine nanoparticles had a favorable toxicity profile compared to normal mercaptopurine drug formulations. Chatterjee et al. investigated thioguanine conjugated PLGA nanoparticles. They obtained an encapsulation rate of 97% and a sustained drug release rate of 60–65% in the first 30 days. The nanoparticles exhibited cytotoxic properties in HeLa cells within 48 h of treatment, which was mediated by intracellular uptake of the nanoparticles. The high encapsulation rates and in vitro efficacy seem promising, but the polymer-based approaches have yet to reach the stage of clinical development.
Metallic nanoparticles are nanosized metals with a size ranging from 10 to 100 nm. These metallic nanoparticles have unique features because they can be synthesized and modified in a way that allows them to bind with ligand, antibodies and drugs. The large surface-to-volume ratio of metallic nanoparticles allows them to bind many molecules . A few studies have been performed with thiopurine-coated metallic nanoparticles. Thiopurine gold nanoparticles have been studied for thioguanine and mercaptopurine therapies . The multivalent and highly adjustable surface architecture of gold nanoparticles offers the opportunity to incorporate multiple drugs on the surface of a single nanoparticle . Podsiadlo et al. found that mercaptopurine-gold nanoparticles were more effective in inhibiting human chronic myeloid leukemia cells than mercaptopurine alone. Aghevlian et al. studied the effects of thioguanine-gold nanoparticles against breast cancer cells. They found significantly stronger inhibition of MCF-7 cancers by the thioguanine-gold nanoparticles compared to free thioguanine at higher concentrations (6.2 µM). Furthermore, one study characterized mercaptopurine-coated magnetite nanoparticles as a controlled release delivery system in vitro . Release rates of 93% and 51% were obtained at pH values of 4.8 and 7.4, respectively. The nanoparticle formulation showed sustained release kinetics, which may reduce systemic side effects. One disadvantage of metallic nanoparticles is that they might cause toxicity in the long term because of their size, as long-term exposure to metallic nanoparticles may affect cellular metabolism and energy homeostasis .
The majority of thiopurines are administered orally as tablets. The tablets used are known as 'immediate-release' or 'conventional' formulations. There are circumstances in which 'immediate release' is not desirable; therefore, manipulation of the release profile of the tablets is required. This is especially relevant in IBD, where inflammation is mainly located in the distal ileum and colon. Modified-release tablets refer to the modification of drug release from a dosage form to change the drug release rate or the localization of the release within the gastrointestinal tract. Attempts to develop modified-release or controlled-release formulations have been tested in the past. Israeli et al. performed a phase II clinical trial of non-absorbable delayed-release tablets of mercaptopurine for Crohn's disease. For conventional oral mercaptopurine tablets, they found Cmax, Tmax, and AUC values of 82.1 ng/ml (SD 28.7), 1.9 h (SD 1.1), and 216.1 ng·h/ml (SD 73.8), while for the delayed-release tablets these values were 6.1 ng/ml, 9 h, and 10.2 ng·h/ml. Only one tablet formulation led to systemic release, while the other did not show any absorption at all. Thus, these delayed-release tablets demonstrated significantly lower or no systemic uptake, suggestive of a local drug effect. Subsequently, this group performed a multi-center, double-blinded, double-dummy, two-arm phase II randomized non-inferiority trial in patients diagnosed with active Crohn's disease. The delayed-release tablets had significantly fewer adverse events than the conventional tablets (67.5% vs 95.8%, P = 0.0079). Higher clinical response rates after 8 weeks were obtained for the delayed-release tablets versus conventional tablets (48.3% vs 21.4%, P = 0.01). They concluded that the delayed-release tablets were non-inferior to conventional mercaptopurine tablets. Their formulation was patented for the treatment of Crohn's disease (WO2015168448A1). This delayed-release formulation might also be interesting for thioguanine, which has similar physicochemical properties to mercaptopurine and has also been proven safe and effective for Crohn's disease. Recently, examples were given of direct extended release of thioguanine with enteric coating to prevent gastric dissolution and target the distal ileum and colon, the main sites of inflammation in IBD (WO2017054042A1).
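For context, Cmax and Tmax are simply the peak of the plasma concentration-time profile and its timing, and AUC is the area under that profile. The sketch below shows the standard trapezoidal-rule calculation on a hypothetical profile loosely anchored to the conventional-tablet figures quoted above; the sampling times and all concentrations other than the peak are invented for illustration.

```python
import numpy as np

# Hypothetical plasma concentration-time profile (not data from the cited trial):
# times in hours, concentrations in ng/ml.
t = np.array([0.0, 0.5, 1.0, 1.9, 3.0, 5.0, 8.0, 12.0, 24.0])
c = np.array([0.0, 30.0, 60.0, 82.1, 55.0, 28.0, 12.0, 4.0, 0.5])

c_max = c.max()               # highest observed concentration (ng/ml)
t_max = t[c.argmax()]         # time of that peak (h)
auc = np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t))   # trapezoidal rule (ng*h/ml)

print(f"Cmax = {c_max:.1f} ng/ml, Tmax = {t_max:.1f} h, AUC(0-24 h) = {auc:.1f} ng*h/ml")
```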
In the past and present, many different strategies have been proposed and tested to improve the absorption and site-specific delivery of thiopurines. Drug delivery formulations have made it into clinical development; however, none has made it into clinical practice so far. Non-absorbable delayed-release tablets have proven clinically non-inferior to conventional tablets for Crohn's disease and were associated with fewer adverse events due to decreased systemic uptake. Furthermore, a phase 0 clinical trial was performed to investigate azathioprine delayed-release tablets, which showed significantly lower bioavailability and lower Cmax compared to conventional azathioprine tablets. The latest developments in nano-formulations for thiopurines are promising because the first preclinical studies have provided encouraging results. Depending on the mechanism and properties of the nano-formulation, the design of these formulations could improve local delivery, reduce unwanted systemic side effects, and offer comparable or improved efficacy at a lower concentration compared to conventional formulations. Local drug delivery is highly desirable in local bowel diseases such as IBD. The disease localization should dictate the necessity for maximal intestinal drug exposure, whereas systemic exposure should be minimized to prevent systemic side effects. Given the potential additional mode of intestinal conversion of thioguanine to 6-TGN, local delivery becomes even more attractive. However, systemic diseases such as leukemia will always require adequate systemic drug levels. Sustained or controlled release profiles of nano-formulations might achieve these adequate systemic levels while reducing side effects, as demonstrated in the previously discussed studies. However, studies in humans have yet to be performed. Ideally, a sustained release formulation is able to decrease high toxic peak concentrations while still achieving adequate therapeutic concentrations. Another possible advantage of sustained release formulations is the reduction of the frequency of oral tablet intake. The translation from preclinical to clinical development still faces challenges that need to be resolved. Most studies are ex vivo and have focused on the potential to improve drug delivery. Both the safety and efficacy of nano-formulations need to be clear to allow clinical development to progress. Another important issue is the potential for large-scale production of these nano-formulations, which might be very costly and could make such drug delivery systems cost-ineffective in daily practice.
Thiopurines are useful treatments in a wide range of diseases. The solubility of thiopurines is low, and the bioavailability of these drugs varies. The use of thiopurines has also been limited because of systemic side effects such as myelotoxicity and hepatotoxicity. Therefore, there is a need for novel drug delivery approaches to improve targeted therapy and reduce side effects for millions of patients worldwide, especially those suffering from IBD. Delayed-release tablets are in clinical development, and preclinical data on nano-formulations show promising results. The combination of novel drug delivery formulations and adequate pharmacogenetic testing might improve patient outcomes in the future.
Predicting Post-Mortem α-Synuclein Pathology by the Combined Presence of Probable
Subject Selection and Clinical Assessments All subjects included in this study were volunteers enrolled in the AZSAND and the Brain and Body Donation Program (BBDP; www.brainandbodydonationprogram.org ), a clinicopathological study of aging at Banner Sun Health Research Institute (BSHRI). In AZSAND/BBDP, recruitment of participants is performed from surrounding communities through public speaking events, media reports, and tours of the institute, and focuses both on cognitively and movement-unimpaired subjects as well as those with dementia and parkinsonism. Additional subjects with dementia and parkinsonism are referred by community neurologists. All subjects provided signed informed consents that were ethically approved by designated BSHRI Institutional Review Boards, for both clinical assessment and autopsy for research purposes. Participants are clinically characterized with annual standardized test batteries, consisting of general neurological, cognitive, and movement disorders components done by cognitive neurologists, movement disorders neurologists, and neuropsychologists. Private medical records are also obtained and reviewed for additional clinical information. All subjects from the AZSAND/BBDP database that had come to autopsy, had completed standardized movement and cognitive exams including a clinician assessment for presence or absence of PRBD, and had performed an olfactory test proximate to death were included in the study, for a total of 652 subjects. A final clinical movement and cognitive diagnosis for all subjects (including presence or absence of PRBD) was assigned by consensus conference following death by review of all clinical data, including AZSAND standardized clinical assessments and private medical records. Data such as the Unified Parkinson's Disease Rating Scale (UPDRS) motor score and the Mini-Mental State Exam (MMSE) cognitive exam were also available. Olfactory testing was performed every 3 years using the University of Pennsylvania Smell Identification Test (UPSIT), a 40-item multiple-choice olfactory identification task. For the purpose of this study, only the last UPSIT test score before death for each subject was used for statistical analysis. Further, subjects from AZSAND, and when available an informant, annually completed the MSQ, a 16-item scale that screens for sleep disorders by asking whether a behavior has been observed at least three times in the past ( http://www.mayoclinic.org/documents/msq-copyrightfinal-pdf/doc-20079462 ), which has been used over the years to support the PRBD diagnosis. Of all included cases, 498 had at least one subject-completed MSQ and 351 additionally had an informant-completed MSQ. PRBD diagnoses of five subjects included in this study were PSG-confirmed by a sleep study. For the purposes of this study, a subject was considered to have PRBD based on clinician review of all clinical data, including the physician findings during subject assessments as well as the MSQ when available, at their final consensus conference. Neuropathological Assessment At the time of death, a full neuropathological examination was performed, as previously described, and a final clinicopathological diagnosis was assigned to each case according to consensus criteria.
The densities of LTS in previously described standardized regions, including the olfactory bulb, amygdala, entorhinal region, pons, medulla, and several neocortical regions, were graded on a four-point semi-quantitative scale on formalin-fixed, paraffin-embedded sections that were immunohistochemically stained using an antibody against phosphorylated α-synuclein peptide. The topographical distribution of LTS was classified using the Unified Staging System for Lewy Body Disorders (USSLBD). Statistical Analyses Statistical analyses were performed using SPSS software (IBM SPSS Statistics 29.0). The demographics and final clinicopathological diagnosis of cases with and without clinician-diagnosed PRBD were compared using two-sample t tests, χ2 tests, or Fisher's exact test when applicable. Receiver operating characteristic (ROC) curve analysis and calculation of the Youden index were used to identify an optimal threshold value on the UPSIT test for predicting the post-mortem presence of LTS and to divide cases into those with a low or a high UPSIT score. ROC curves were further analyzed to assess the sensitivity and specificity with which a clinical diagnosis of PRBD (PRBD or no PRBD) and the UPSIT score (low or high UPSIT score) predicted the post-mortem presence of LTS (LTS or no LTS) and a final clinicopathological diagnosis of PD and DLB.
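As an illustration of this thresholding step, the minimal sketch below shows how a Youden-optimal cut-off can be read off an ROC curve with scikit-learn. The data are synthetic and the score distributions are assumptions; the sketch is not a re-analysis of the study data, it only demonstrates the procedure for a score (like the UPSIT) in which lower values indicate disease.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)

# Synthetic example: 1 = LTS present at autopsy, 0 = absent, plus a 0-40
# olfactory-type score in which LOWER values are expected with disease.
lts = np.concatenate([np.ones(300, dtype=int), np.zeros(350, dtype=int)])
score = np.concatenate([rng.normal(15, 6, 300), rng.normal(27, 6, 350)]).clip(0, 40)

# Because low scores indicate disease, negate the score so that higher
# decision values favor the positive (LTS) class.
fpr, tpr, thresholds = roc_curve(lts, -score)
j = tpr - fpr                      # Youden's J = sensitivity + specificity - 1
best = j.argmax()

print(f"Optimal cut-off: score <= {-thresholds[best]:.1f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}, J = {j[best]:.2f})")
```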
Demographics and Clinicopathological Diagnosis of Cases with and without PRBD Of the 652 subjects with a clinical assessment for PRBD and a UPSIT test available, 156 cases had a clinical diagnosis of PRBD. When comparing cases with and without PRBD (see Table ), cases with PRBD had a significantly younger age at death (P < 0.001), a higher proportion of men (χ2 = 42.96; P < 0.001), a lower UPSIT score (P < 0.001), a higher UPDRS motor score (P < 0.001), a lower MMSE score (P = 0.001), and a higher mean USSLBD stage as well as a higher LB density score (both P < 0.001). Of the 652 cases in the cohort, 288 (44.2%) had Lewy bodies. Most cases had mixed pathology; the most common clinicopathological diagnosis was Alzheimer's disease (AD), in 236 (36.2%) cases; 121 (18.6%) cases had PD; 73 presented with incidental Lewy body disease (ILBD) at autopsy; 41 (6%) had DLB; and 187 (28.7%) were controls (ie, within normal limits on their cognitive and movement assessments and with no LTS at autopsy).
Clinician-diagnosed PRBD was more frequently present in cases with LTS, including DLB (15/41: 36.6%) and PD (81/121: 66.9%), than in controls (14/187: 7.5%) and AD (59/236: 25%) cases (χ2 = 100.59; P < 0.001). PRBD was present in only 3.6% (2/56) of cases with ILBD, in 21.9% (16/73) of cases with progressive supranuclear palsy (PSP), including 30.3% (10/33) of those with concomitant LTS, and in 66.7% (4/6) of cases with MSA (Table ). Cases with a Low or High UPSIT Score To classify cases as having either a low or a high UPSIT score, ROC curve analysis and calculation of the Youden index (sensitivity + specificity − 1) identified a UPSIT score of 20.5 as the optimal threshold value to predict LTS. Hence, for further analyses, cases were classified as having a low UPSIT score (UPSIT score ≤20) or a high UPSIT score (UPSIT score >20). A low UPSIT score was significantly more frequent in subjects with PRBD (109/152: 71.7%) than in cases without PRBD (201/494: 40.7%) (χ2 = 44.82; P < 0.001). A low UPSIT score was found in 65.7% (155/236) of AD cases (84.3% (102/121) of those with concomitant LTS and 46.1% (53/115) of those without), in 42.9% (24/56) of cases with ILBD, in 86.7% (105/121) of cases with PD, in 97.7% (40/41) of cases with DLB, in 46.6% (34/74) of PSP cases (69.7% (23/33) of PSP cases with concomitant LTS at autopsy and 27.7% (11/40) of those without), and in 16.7% (1/6) of cases with MSA. Histological Presence of Lewy Type α-Synucleinopathy As MSA cases have synuclein pathology but are not considered to have Lewy type synuclein pathology, they were excluded from further analysis. The histological presence of LTS, across all clinicopathological diagnoses, was found in 288 of 652 (44.2%) cases. Presence of LTS was significantly more frequent in those who had PRBD (112/152: 73.7%) than in those without (177/494: 35.8%) (χ2 = 67.37; P < 0.001) and in cases with a low UPSIT score (215/310: 69.4%) than in cases with a high UPSIT score (74/336: 22.0%) (χ2 = 146.10; P < 0.001). Presence of LTS was significantly more frequent in cases with PRBD + low UPSIT (99/109: 90.8%) than in cases with PRBD only (73.7%, see above) (χ2 = 13.79; P < 0.001) or cases with a low UPSIT score only (69.4%, see above) (χ2 = 19.79; P < 0.001), as well as in other subgroups, including cases with PRBD + high UPSIT (13/43: 30.2%) (χ2 = 58.39; P < 0.001) and cases without PRBD with either a low (116/201: 57.7%) (χ2 = 34.73; P < 0.001) or a high UPSIT score (61/293: 20.8%) (χ2 = 162.50; P < 0.001). In cases with a high UPSIT score, no differences were found between those with and without a PRBD diagnosis (χ2 = 1.93; P = 0.2). See Figure for the proportions of cases with LTS in each group. Of the PRBD cases, 44 (28%) were in USSLBD stage 0 (no Lewy bodies), one (0.6%) was in stage I (olfactory bulb only), 11 (7.1%) were in stage II (either IIa brainstem predominant or IIb limbic predominant), 39 were in stage III (brainstem and limbic), and 61 were in stage IV (neocortical). For the group without PRBD, 321 (64.7%) were in stage 0, 16 (3.2%) were in stage I, 74 (14.9%) were in stage II, 49 (9.8%) were in stage III, and 36 (7.3%) were in stage IV. See Figure for the percentages of cases in each USSLBD stage for cases with and without PRBD.
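As a worked illustration, the χ2 value of 67.37 quoted above for PRBD versus LTS can be recovered directly from the reported counts (112/152 PRBD cases and 177/494 non-PRBD cases with LTS). The sketch below assumes a standard Pearson χ2 test on the 2 × 2 table without continuity correction; it is meant only to show how the reported statistic relates to those counts.

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x2 table from the counts reported above:
# rows = PRBD yes / no, columns = LTS present / absent at autopsy.
table = np.array([[112, 152 - 112],
                  [177, 494 - 177]])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2e}")  # chi2 is approximately 67.4
```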
Prediction of Post-mortem Lewy Type α-Synucleinopathy and a Final Clinicopathological Diagnosis of Lewy Body Disease ROC curves demonstrated that a PRBD diagnosis predicted the presence of post-mortem LTS with a sensitivity of 38.8%, a specificity of 88.8%, an overall accuracy of 66.4%, and a Youden's index of 0.276. Sensitivity of a low UPSIT score for predicting LTS was 74.7%, specificity 73.6%, overall accuracy 73.8%, and Youden's index 0.478. When combining both the presence of a PRBD diagnosis and a low UPSIT score, sensitivity for predicting LTS was 34.3%, and the specificity increased to 97.2%, for an accuracy of 69.0%. Additionally, when looking at cases with either PRBD or olfactory loss, we observed a sensitivity of 80.3% and a specificity of 61.2%. PRBD predicted a final clinicopathological diagnosis of PD or DLB with a sensitivity of 59.3%, a specificity of 88.4%, and an overall accuracy of 81.1%, whereas the UPSIT score predicted a diagnosis of PD or DLB with a sensitivity of 89.5% and a specificity of 65.9%, with an accuracy of 71.8%. Combining both PRBD and UPSIT yielded a sensitivity of 54.9% and increased the specificity to 95.9%, with an accuracy of 85.6%, for predicting a final clinicopathological diagnosis of Lewy body disease. See Table and Figure for characteristics of these ROC curves.
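The sensitivity, specificity, accuracy, and Youden's index quoted above for PRBD follow directly from the same 2 × 2 counts used earlier. The short sketch below recomputes them as a worked example; the counts are taken from the results reported above and nothing else is assumed.

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, accuracy, and Youden's J from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + fp + tn)
    return sens, spec, acc, sens + spec - 1

# PRBD as a predictor of LTS: 112/152 PRBD cases and 177/494 non-PRBD cases had LTS.
tp, fn = 112, 177              # LTS cases with / without a PRBD diagnosis
fp, tn = 152 - 112, 494 - 177  # non-LTS cases with / without a PRBD diagnosis
sens, spec, acc, j = diagnostic_metrics(tp, fn, fp, tn)

# Prints roughly: sensitivity 38.8%, specificity 88.8%, accuracy 66.4%, J 0.276
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, accuracy {acc:.1%}, Youden's J {j:.3f}")
```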
This clinicopathological study investigated the combined value of a clinician diagnosis of PRBD and reduced olfactory function in predicting the post-mortem presence of Lewy type α-synucleinopathy (LTS). Our main result demonstrates that the combination of a clinician diagnosis of PRBD and a low olfactory score has high specificity but low sensitivity for the presence of LTS and a final clinicopathological diagnosis of Lewy body disease. Overall, this work reinforces the previously reported association of PRBD and reduced olfactory function with LTS, now in a large autopsy-proven series. These findings suggest that the use of both a clinician-diagnosed PRBD assessment and olfactory function may provide a cost-effective means of predicting LTS in a broader community, which can then justify the use of more expensive screening tests such as sleep studies and/or real-time quaking-induced conversion synuclein seeding assays. We extend our previous work reporting PRBD to be highly specific, but not sensitive, for LTS pathology. As expected, a large fraction of cases with LTS did not have PRBD (177/289: 61.2%), explaining the low sensitivity. The UPSIT score alone was less specific, at 73.6%, because olfactory dysfunction is relatively common in the general population, but more sensitive, at 74.7%, for the post-mortem presence of LTS. Our novel finding is that combining the assessments of both PRBD and UPSIT increased the specificity for predicting the post-mortem presence of LTS from 88.8% with PRBD alone to 97.2%. This is consistent with olfactory dysfunction being repeatedly associated with α-synuclein pathology, as well as with clinical findings identifying olfactory dysfunction as a predictor of conversion to Lewy body disease in idiopathic RBD. A low UPSIT score was significantly more frequent in subjects who had a clinical diagnosis of PRBD, which is also in line with studies showing more severe olfactory impairment in RBD and worse olfactory function correlating with PRBD severity in PD. Our results reinforce that measuring olfactory function in subjects with PRBD may be useful for screening individuals to be included in an at-risk or prodromal study, which could potentially increase the likelihood of a positive study. Diagnostic accuracy for early PD is still low, and screening for these early symptoms of Lewy body disease is crucial for identifying subjects eligible to participate in neuroprotective trials and for therapy, when available, as well as for offering appropriate counseling to patients and families. Results from this study, along with published results from AZSAND autopsied cases, found LTS to be less frequent in PRBD (absent in 28% of cases) than in cohorts of subjects with sleep-study-confirmed RBD and neurodegenerative disorders. This suggests that, when considering a broader and less selected population, a lower positive predictive value of ~73.9% is observed for the prediction of LTS in PRBD, which is increased to 90.8% when olfactory function is also considered. AZSAND participants are recruited as community-dwelling volunteers, including cognitively and movement-unimpaired individuals, in addition to specific recruitment efforts for individuals with dementia and parkinsonism. Similarly, when screening for PRBD in the general population, one study reported that 35% of subjects did not have PSG-confirmed RBD.
Importantly, individuals with population-screened PRBD were reported to have a similar level of impairment with regard to prodromal markers of PD and a similar underlying prevalence of neurodegenerative synucleinopathy as sleep center-referred patients, emphasizing the importance of closely monitoring these individuals at high risk of developing a synucleinopathy. In this study, we observed that, among cases with PRBD that were found not to have LTS at autopsy, 38% had periodic limb movements of sleep and 27% presented with restless legs syndrome. Although these observed motoric impairments may co-exist with RBD, they may also have been potential confounding factors, which reinforces the importance of PSG confirmation of RBD as a next step for a definite diagnosis. Among other diagnostic groups, we observed a high rate of RBD (67%) in MSA cases, whereas only one of these cases (16%) had reduced olfactory function, which is in line with previous literature. We report a relatively high rate of RBD of 22% in PSP cases; RBD in PSP has also previously been reported in the literature, although not in pathologically confirmed cases. We hypothesize that this rate may potentially be explained by the additional presence of concomitant LTS, as a higher rate of 30% was observed in PSP cases with LTS compared with 15% in those without. Olfactory dysfunction has been reported to be less frequent in PSP and less pronounced than in PD subjects. We report a lower UPSIT score in 46% of PSP cases, with most of these cases also having concomitant LTS. Further, we report only 3.6% of ILBD cases to have PRBD. This low rate is surprising considering that both RBD and ILBD are considered prodromal stages of synucleinopathies. These ILBD cases with PRBD were also found to have a reduced mean olfactory function, as previously reported in ILBD cases. Nevertheless, the reason for this low rate of PRBD in ILBD is not clear to us; it may potentially be explained by the older age of these participants in our program, suggesting that these cases have been resistant to clinically manifesting symptoms, as one would hypothesize that most RBD cases would have phenoconverted before reaching their 80s. It would be of high interest to better understand the mechanisms through which individuals can manifest such resistance. We acknowledge some limitations of this study. A major limitation is that a definite diagnosis of RBD with PSG confirmation was not available. Still, the diagnosis was made by detailed history and with the help of a validated instrument when available. This better reflects clinical practice, in which PSG assessment for RBD is not routinely performed, and is more suitable for screening larger populations such as the one in this study. Moreover, although AZSAND/BBDP participants are enrolled as normal aging controls, there may be a volunteer participation bias. Additionally, it is important to note that specific recruitment efforts are directed toward subjects with dementia and parkinsonism, along with additional referrals from community neurologists, which explains the higher rates of disease. Therefore, considering this enrollment selection bias toward dementia and parkinsonism, findings from this population may not be totally generalizable to either tertiary referral or community-based populations. Another limitation is the cross-sectional design of this study, which evaluates the last assessment before death rather than longitudinal changes.
Yet, studies with neuropathology validated LTS diagnoses remain sparse; hence, a major strength of this study is the pathological confirmation of LTS and final clinicopathological diagnosis in a large sample size, which allows to appropriately evaluate the sensitivity and specificity of PRBD and UPSIT olfactory test. In conclusion, PRBD, diagnosed without sleep study confirmation and combined with a low performance on the UPSIT olfactory test is highly specific for predicting the post‐mortem presence of LTS. Therefore, the use of both PRBD assessment and olfactory function may provide a cost‐effective means of predicting LTS in a broader community. Ethical Compliance Statement: This work was ethically approved by designated BSHRI Institutional Review Boards. All subjects gave written informed consents. We confirm that we have read the Journal's position on issues involved in ethical publication and affirm that this work is consistent with those guidelines. Funding sources and conflicts of interests: The AZSAND and BBDP has been supported by the National Institute of Neurological Disorders and Stroke (NINDS) (U24 NS072026 National Brain and Tissue Resource for Parkinson's Disease and Related Disorders), the National Institute on Aging (NIA) (P30 AG019610 and P30AG072980, Arizona Alzheimer's Disease Center), the Arizona Department of Health Services (contract 211, 002, Arizona Alzheimer's Research Center), the Arizona Biomedical Research Commission (contracts 4001, 0011, 05–901, and 1001 to the Arizona Parkinson's Disease Consortium) and The Michael J. Fox Foundation (MJFF) for Parkinson's Research. The authors declare that there are no conflicts of interest relevant to this work. Financial Disclosures of all authors (for the previous 12 months): E.D.D., C.B., J.K.L., and G.E.S. declare no disclosures. C.T. has been supported by postdoctoral fellowships from the Canadian Institutes of Health Research (CIHR) and the Quebec's Health Research Funds (FRQS). C.H.A. received consulting fees from CND Life Sciences. H.A.S. has received research support from Intra‐cellular Therapeutics, Transposon, Parkinson Study Group/UCB, Parkinson's Foundation, NINDS, Supernus/US World Meds, MJFF, Jazz Pharmaceuticals, Barrow Neurological Foundation, Saccadous, and Cerevel Therapeutics. H.A.S. has additionally served as a consultant for the Parkinson Study Group/Nq, Biogen, AbbVie, Sage/Biogen, Praxis, KeifeRx, Fasikl, and Jazz Pharmaceuticals. S.M. receives funding from Parkinson's Progression Markers Initiative. P.C. has received research support from Lewy Body Dementia Association and Arizona Alzheimer's Consortium. D.R.S. received research support from Annovis, Biogen, Cognition Therapeutics, EIP Pharma, Eisai, Jazz Pharmaceuticals, MJFF, Neuraly, and Neurocrine; has been a consultant for Amneal, AbbVie, Kyowa, and Neurocrin; and received speaker honoraria from American Osteopathic Association, American Academy of Neurology, Parkinson Movement Disorders Alliance, International Parkinson and Movement Disorders Society. A.A. has received honoraria or support for consulting; participates in independent data safety monitoring boards; provides educational lectures, programs, and materials; or serves on advisory boards for Acadia, Alzheimer's Association, Alzheimer's Disease International (ADI), AriBio, Biogen, Eisai, Life Molecular Imaging, Lundbeck, Merck, Novo Nordisk, ONO, Prothena, and Roche/Genentech. A.A. receives book royalties from Oxford University Press for a medical book on dementia. A.A. 
receives institutional research grant/contract funding from NIA/National Institutes of Health (NIH) 1P30AG072980, NIA/NIH U24AG057437, AZ DHS CTR040636, the Foundation for NIH, Washington University St Louis, and Gates Ventures. A.A.'s institution (Banner Health) receives/received funding for clinical trial grants, contracts and projects from government, consortia, foundations, and companies for which he serves/served as contracted site-PI. T.G.B. has received consulting fees from Aprinoia Therapeutics, Biogen, and Acadia Pharmaceuticals. He has received payment or honoraria from the NIH, International Movement Disorders Association, World PD Coalition, Mayo Clinic Florida, Stanford University, and the IOS Press Journal of Parkinson's Disease; and support for attending meetings from the Alzheimer's Association, AD/PD/Kenes Group, Mayo Clinic Florida, and the Universitätsklinikum Hamburg-Eppendorf. He also has a leadership/fiduciary role and stock options with Vivid Genomics. (1) Research project: A. Conception, B. Organization, C. Execution; (2) Statistical Analysis: A. Design, B. Execution, C. Review and Critique; (3) Manuscript Preparation: A. Writing of the First Draft, B. Review and Critique. C.T.: 1A, 1B, 1C, 2A, 2B, 2C, 3A, 3B. C.H.A.: 1A, 1B, 1C, 2C, 3B. H.A.S.: 1C, 3B. E.D.D.: 1C, 3B. S.M.: 1C, 3B. P.C.: 1C, 3B. C.B.: 1C, 3B. D.R.S.: 3B. J.K.L.: 3B. A.A.: 1C, 3B. G.E.S.: 1A, 1C, 3B. T.G.B.: 1A, 1B, 1C, 2C, 3B.
Digital health, digital medicine, and digital therapeutics in cardiology: current evidence and future perspective in Japan
Almost 10 years ago, Japan set out the Action Plan of the Growth Strategy, which declared initiatives for the digitalization of medicine, nursing care, and healthcare to achieve the world's most advanced medical care. This action promoted a major push for digital health and digital medicine in Japan. Specific plans included the following: (1) construction of a digital infrastructure for medicine, nursing care, and healthcare; (2) utilization of the digital infrastructure; (3) advanced digitalization of on-site operations; and (4) system establishment for utilizing medical and personal information. These initiatives formed the foundation of the Japanese national strategy and have been continuously refined, resulting in the current environment of digital health and digital medicine. In this article, we define digital health-related terminologies. First, "digital health" is a comprehensive concept covering the utilization of information and communication technology (ICT) for all medical, nursing care, or healthcare support. ICT here includes digital technologies such as medical big data (genomic and electronic health information), artificial intelligence (AI), and extended reality (XR). The term often implies that the objective is to use the latest, state-of-the-art digital technology to solve various problems in healthcare. Digital technology seems well suited to the following fields: patient treatment; health promotion including primary prevention; the conduct and support of clinical research, including decentralized clinical trials; medical education; observation and evaluation of the patients' clinical course; and public health monitoring for the general population or specific disease cohorts. Moreover, "digital medicine" refers to digital health related to medical care and broadly supporting the practice of medicine. In digital medicine, the use of digital technology for disease treatment is referred to as "digital therapeutics (DTx)" (Fig. ). For years, various studies and clinical applications other than digital technologies have been conducted to solve healthcare issues. However, digital technology has become a powerful tool for solving healthcare-related problems in any medical field and has strongly assisted the evolution of digital health, along with the recent leap in ICT development, the miniaturization and technical advantages of mobile devices, easy access to vast and organized data and ample computational resources, and the establishment of 5th-generation (5G) or higher mobile communication systems that allow for high capacity, low latency, and multiple simultaneous connections. Typical digital technologies include online medical services, AI and machine learning (AI/ML), Web 3.0 (web3) and blockchain technology, XR including the metaverse, electronic health records (EHR) and personal health records (PHR), and mobile health (mHealth). Telehealth and telemedicine As for online medical services, the Ministry of Health, Labour, and Welfare (MHLW) in Japan has provided "the guidance for the appropriate implementation of telemedicine." In this guidance, telehealth refers to health promotion and medically related activities using ICT equipment.
It includes not only telemedicine but also online medical advice, remote healthcare communication, and real-time online consultation between physicians, and is similar in concept to digital health. In particular, telemedicine is strictly defined as real-time examination, diagnosis, explanation of laboratory results, and treatment between the physician and the patient, using remote communication tools installed on mobile devices or computers. Of note, regardless of whether it is an insurance-covered medical treatment, telemedicine must be conducted in accordance with this guidance. Owing to the recent coronavirus disease 2019 (COVID-19) pandemic, telemedicine has rapidly, and out of necessity, gained recognition in Japan. After the state of emergency was declared in April 2020, the MHLW permitted the use of telemedicine from the first consultation onward. Since then, the proportion of hospitals or clinics that could provide telemedicine increased to 15.2% in April 2021. The 2022 revision of medical service fees, which raised several telemedicine fees (although they remain lower than face-to-face outpatient fees), may help accelerate the widespread use of telemedicine. However, the number of telemedicine consultations conducted in Japan remains considerably lower than that in European and North American countries. Thus, the advantages and challenges of telemedicine should be reconsidered to further promote its use. AI/ML AI has no single definition. The Japanese Society of Artificial Intelligence defined AI as technology aimed at performing advanced inference accurately on a large amount of knowledge data. However, the concept of AI is highly diversified and still under discussion. Thus, when using this term, we need to pay attention to what kind of specific AI technology is being referred to. Currently, rule-based systems and ML are the AI technologies most frequently used in medical sciences. In addition, the development of computational resources and easy access to medical big data enable us to utilize ML and its subfield, deep learning, for clinical applications. One of the primary approaches for medical AI implementation today is to leverage ML, including deep learning or reinforcement learning, as a tool to obtain a target output. In other words, medical AI aims to maximize the performance of the output, whether "prediction," "classification," or "generation" of diseases or data currently required in medicine, and numerous efforts are being made to implement it in society. Recently, a large language model, Generative Pre-trained Transformer 3 (GPT-3), refined with supervised fine-tuning and reinforcement learning from human feedback (InstructGPT), and its dialogue-optimized web console (ChatGPT) have attracted huge attention worldwide. Surprisingly, ChatGPT has already scored at or near the passing threshold on the United States Medical Licensing Examination. Products applying these natural language processing models have the potential to rapidly penetrate every aspect of the medical field.
The term "metaverse" refers to a virtual space where anyone can communicate as in the real world and engage in economic activities involving money, as both fiat currency and cryptocurrency. The metaverse uses cross reality (XR), including virtual reality (VR), as its enabling technology, and XR is becoming noteworthy in the medical field. XR VR is a technology that creates a virtual environment through a computer, stimulating the human senses so that the environment is perceived as "reality." Currently, VR controls the visual and auditory senses, and it has often been defined as an environment in which the external "real" world is completely shut out by a fully immersive head-mounted display (HMD). Similar concepts include Augmented Reality (AR) and Mixed Reality (MR), which mainly refer to real-time overlaying (for AR) or merging (for MR) of environments and objects onto the actual reality we perceive, using a see-through HMD or a smartphone. Clearly distinguishing them is difficult; thus, the comprehensive concept of XR (cross reality or extended reality) emerged. In the medical field, XR has already been used for medical equipment-level surgical support systems, medical education, and XR-based rehabilitation systems. EHR/PHR EHR is a collection of electronic medical records stored in an electronic chart, originally intended for use only within each hospital or clinic but made shareable and accessible in a specific region or nationwide. EHR contains sensitive personal information; thus, it has been managed mainly by medical institutions. Conversely, PHR refers to securely usable online medical, health, care, and well-being information collected and managed by the person described in the record. In PHR, health-related information can be shared and aggregated at the individual level. Thus, even if people visit multiple clinics, PHR can manage not only their medical records but also their lifelong data obtained by wearable devices during daily life. mHealth The term "mHealth" generally refers to digital health using mobile devices. Currently, the most used mobile devices are smartphones and wearable devices. With the advancement of "smart" devices, mobile devices can now measure and estimate not only steps or pulse rates but also electrocardiograms, skin temperature, blood oxygen levels, stress levels, blood pressure, and plasma glucose levels. Wearable devices can also be linked with smartphones to allow viewing, verifying, and processing of biometric data in detail and sharing of data with healthcare providers as needed. In mHealth, DTx is attracting attention as a novel third option for disease treatment; it is one of the three core treatment pillars, namely, medical, surgical, and digital therapies. I especially focus on DTx in the following sections.
DTx is a novel therapeutic option that provides treatment for illnesses through software applications (apps) delivered via digital devices, and its scope is expanding to disease prevention and management. Currently, smartphones and VR HMDs are the devices most commonly used for this purpose. The concept of DTx was introduced in Japan with the revision of the Act on Securing Quality, Efficacy, and Safety of Products Including Pharmaceuticals and Medical Devices, which established that software programs (i.e., Software as a Medical Device [SaMD]), including standalone apps themselves, could be certified as medical devices (Fig. ). Currently, software apps that provide DTx are called "therapeutic apps." The term "prescription digital therapeutics" is also used, considering that physicians or healthcare professionals "prescribe" the therapeutic app to patients, have them install it on their digital devices, and thereby provide the intended treatment. DTx not only aims to treat illnesses but also provides patients with a direct digital intervention that has a scientifically proven treatment effect and has been approved by regulatory agencies. In addition, compared with medical care provided in hospitals or clinics, DTx can provide seamless treatment interventions through mobile digital devices even in patients' daily lives.
Presently, the US and Germany are leading the way in digital health-related policies, regulations, and product development. In the US, regulation of mobile medical applications (MMAs), including DTx, was first issued by the Food and Drug Administration (FDA) in 2013 and was updated in 2015, 2019, and 2022. Similar to traditional medical devices, MMAs that have a significant impact on patients or medical decision-making require appropriate regulatory processes, including clinical trials. However, the conventional medical device approval process is not automatically suited to the rapidly evolving digital technology used in software and MMA development. Therefore, in July 2017, the FDA launched the Digital Health Innovation Action Plan, which includes the Software Pre-Cert Pilot Program, a system intended to enable faster and safer review and approval processes for digital health products, including MMAs. This innovative system assesses the development capabilities and safety practices of the manufacturers rather than each individual medical device software product, allowing companies to bring their FDA-cleared software to market faster and more efficiently. Although the pilot program was completed in September 2022, the FDA continues to develop policies with the Digital Health Center of Excellence, a digital health resource center, to improve regulatory processes related to medical device software, enabling digital health stakeholders to advance all aspects of healthcare through high-quality digital health innovation. Germany is releasing more DTx medical device software to the market than the US. Germany has the same type of public health insurance system as Japan, and its implemented policies provide important hints on how to promote the widespread use of DTx and medical device software. In November 2019, Germany launched the Digital Healthcare Act (Digitale-Versorgung-Gesetz, or DVG), which defines the review and approval process for low-risk medical devices essentially based on digital technologies, such as Digital Health Applications (Digitale Gesundheitsanwendungen, or DiGA). As with the FDA, the German Federal Institute for Drugs and Medical Devices (Bundesinstitut für Arzneimittel und Medizinprodukte, or BfArM) assesses DiGA under the DVG according to the following requirements: safety, functionality, quality, data protection, data security, and positive effects on care. However, the striking point of the DVG is that even if a DiGA and its manufacturer satisfy all requirements except the "positive effects on care," the DiGA can still be provisionally registered in the BfArM directory. Therefore, even if the manufacturer has not yet submitted the DiGA's clinical efficacy validation data through regulatory processes such as clinical trials, the DiGA can still be registered and tentatively reimbursed by health insurance as long as the app's safety, functionality, quality, data protection, and data security are satisfactory. The provisional reimbursement period is limited to 12 months (or can be extended to 24 months in specific situations) until the clinical efficacy evaluation is confirmed. However, during this period, the manufacturer can conduct the DiGA's pivotal trials, or real-world data can be collected while the app is distributed in the market with health insurance coverage. As of February 2023, 48 DiGAs have been registered in the BfArM directory.
Of these apps, 16 (33%) have achieved permanent reimbursement, 5 (approximately 10%) have been removed from the list, and 27 (56%) are in the provisional reimbursement period and are being closely monitored to determine whether they can demonstrate sufficient clinical efficacy to obtain permanent reimbursement.
DTx in Japan has been led mainly by several start-ups since 2014, when the Act on Securing Quality, Efficacy, and Safety of Products Including Pharmaceuticals and Medical Devices was revised. As in the US and Germany, medical device software in Japan that claims a therapeutic effect for a disease must be regulated by the MHLW. In 2020, to further promote the early implementation of novel SaMD products, including DTx apps, in Japan, the MHLW launched the Digital Transformation Action Strategies in Healthcare (DASH) for SaMD. This strategy included the following: (1) seeking promising technologies; (2) arranging and disclosing the concept of a review process specialized for SaMD; (3) centralizing the SaMD consultation service; (4) establishing a rapid, efficient, and flexible SaMD-compatible review system; and (5) reinforcing the review system for early SaMD implementation. Such regulatory efforts to implement SaMD have improved the related guidance and guidelines, leading to a better environment for developing medical device software in Japan. As of February 2023, two types of DTx (for nicotine addiction [CureApp SC™] and for hypertension [CureApp HT™]) have been approved and reimbursed by the MHLW in Japan. Additionally, a DTx for insomnia (SUSMED Med CBT-i) has recently been cleared by the MHLW. The following sections focus on the former two DTx apps, which relate to cardiovascular medicine.

DTx system for nicotine dependence
The CureApp SC™ DTx for nicotine dependence is a therapeutic system that provides intervention and support for the psychological dependence component of smoking cessation, in addition to the 12-week standard smoking cessation program in Japan. This DTx system consists of a smartphone therapeutic app, a Bluetooth-paired mobile checker device for exhaled carbon monoxide (CO), and web-based personal computer software for physicians. It provides individually tailored behavioral therapy and quit-smoking guidance content through the therapeutic app, thereby intensifying the treatment of psychological dependence on smoking. Moreover, the equipped mobile CO breath analyzer allows patients to measure their expiratory CO levels daily and view their cessation progress through the smartphone app or the web-based software for physicians. A multicenter randomized controlled trial assessed the usefulness of the DTx for nicotine dependence. A total of 584 patients diagnosed with nicotine dependence were allocated to one of the following groups: an intervention group (using the DTx system for nicotine dependence in addition to a standard smoking cessation program) and a control group (using a sham app in addition to a standard smoking cessation program). The primary outcome, the continuous abstinence rate from weeks 9 to 24, was significantly higher in the DTx intervention group than in the control group (63.9% vs. 50.5%; odds ratio [OR], 1.73; 95% confidence interval [CI], 1.24–2.42; P = 0.001), and this DTx add-on effect continued at least up to week 52. Hence, the DTx system for nicotine dependence significantly improved the continuous abstinence rate when added to a standard smoking cessation program. Based on these results, the CureApp SC™ DTx system was approved and reimbursed by the MHLW in Japan in 2020 as the first DTx in Asia.

SaMD DTx app for hypertension
The CureApp HT™ DTx for hypertension is a SaMD therapeutic app that aims to provide continuous treatment for high blood pressure, not only during intermittent clinic visits but also in patients’ daily lives.
This app was developed to efficiently support and maximize the blood pressure-lowering effect of lifestyle modification, which is recommended for all patients with high blood pressure by the hypertension management guidelines. Although many physicians associate hypertension treatment directly with pharmacological therapy, nonpharmacological therapy has also demonstrated robust blood pressure-lowering effects. Nonpharmacological therapy includes a low-salt diet, weight reduction, regular exercise, moderate alcohol consumption, good sleep, and stress management. With this background built into its algorithm, the DTx app for hypertension aims to help patients with hypertension learn, practice, and habituate each nonpharmacological therapy through the app in daily life outside hospitals or clinics. The app first provides users with knowledge and techniques for the six nonpharmacological therapies for hypertension (Step 1: input and education). Next, with the app’s support, the users implement specific lifestyle modifications related to the nonpharmacological therapies based on the knowledge and techniques obtained in Step 1 (Step 2: app-supported experiences). Finally, the users independently set, implement, and evaluate their own goals and achievements of lifestyle modification and truly habituate the target nonpharmacological therapies in their daily lives (Step 3: self-planning and evaluation).

The efficacy of the DTx for hypertension was tested in the HERB-DH1 pivotal clinical trial. The trial enrolled 390 patients aged 65 years or younger who had essential hypertension (grade I or II) but were not taking antihypertensive agents; they were allocated to either the DTx intervention group (which received the DTx app for hypertension plus lifestyle modification guidance according to the guidelines) or the control group (which received only lifestyle modification education according to the guidelines). The primary endpoint, the change in 24-hour systolic blood pressure measured by ambulatory blood pressure monitoring from baseline (week 0) to week 12, was −4.9 mmHg and −2.3 mmHg in the DTx intervention and control groups, respectively. Hence, the DTx app intervention group had a significantly greater reduction in blood pressure than the control group (mean difference, −2.4 mmHg; 95% CI, −4.5 to −0.3; P = 0.024). Additionally, the reduction in morning home systolic blood pressure from baseline to week 12 was greater in the DTx intervention group than in the control group (−10.6 mmHg vs. −6.2 mmHg; mean difference, −4.3 mmHg; 95% CI, −6.7 to −1.9; P < 0.001). Furthermore, these blood pressure-lowering effects persisted at least up to week 24. In conclusion, the DTx for hypertension added to guideline-based hypertension management was effective in patients aged 65 years or younger who had essential hypertension and were not taking antihypertensive agents.

In addition, we conducted a cost-effectiveness analysis of the DTx for hypertension using the background characteristics and effect data of both the intervention and control groups in the HERB-DH1 trial. In this analysis, we examined the medical economic effects of using the DTx therapeutic app for hypertension over a lifetime horizon. The differences in medical costs and quality-adjusted life years (QALYs) between the DTx intervention group and the control group were 110 717 yen (higher in the DTx intervention group) and 0.092 QALY (greater in the DTx intervention group), respectively. Therefore, the incremental cost-effectiveness ratio (ICER) was calculated to be 1 199 880 yen/QALY.
This ICER value was below the “willingness-to-pay” threshold of 5 million yen/QALY, a commonly used benchmark in Japan for the acceptable medical cost of gaining 1 QALY. Thus, prescribing the DTx app might be cost-effective over a lifetime. Based on this body of evidence, the CureApp HT™ DTx for hypertension was cleared and reimbursed by the MHLW in Japan in 2022 as the world’s first hypertension therapeutic app.
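To make the reported figure explicit: the ICER is the incremental cost divided by the incremental effectiveness. Using the rounded cost and QALY differences quoted above gives a value close to, but not exactly, the published 1 199 880 yen/QALY, presumably because the published figure was computed from unrounded inputs:

\[
\mathrm{ICER} \;=\; \frac{\Delta \text{cost}}{\Delta \text{QALY}} \;=\; \frac{110\,717\ \text{yen}}{0.092\ \text{QALY}} \;\approx\; 1\,203\,000\ \text{yen/QALY} \;<\; 5\,000\,000\ \text{yen/QALY (willingness-to-pay threshold)}
\]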
This review has introduced the latest digital health technologies and their specific terminology, together with DTx in cardiovascular medicine. Although only three DTx apps have been approved by the MHLW in Japan, several manufacturers, including DTx start-ups and pharmaceutical companies, are continuously developing DTx and conducting clinical research to obtain regulatory approval. The number of DTx development pipelines in Japan now exceeds 30 and continues to increase every year (Table ). The movement to promote DTx in cardiovascular medicine, which applies various digital technologies to patients with cardiovascular diseases while considering the technologies’ safety, efficacy, and cost-effectiveness, will accelerate not only through basic experiments and clinical studies but also through social implementation.
Top Concerns of Tweeters During the COVID-19 Pandemic: Infoveillance Study
Data Collection
We collected coronavirus-related tweets between February 2, 2020, and March 15, 2020, using the Twitter standard search application programming interface (API) with a set of predefined search terms (“corona,” “2019-nCov,” and “COVID-19”), which are the most widely used scientific and news media terms relating to the novel coronavirus. We extracted and stored the text and metadata of the tweets, including the time stamp, the number of likes and retweets, and user profile information such as the number of followers. We stored the tweets in a database table in which the primary key was the tweet ID; as a result, duplicates were not stored in our database. Only English-language tweets were collected in the study. Since the metadata of tweets, such as the number of likes and retweets, might change over time, we recollected the updated metadata at the end of the study period using the tweet IDs of the already collected tweets; the Twitter standard search API allows access to old tweets using tweet IDs. We used the Tweepy Python (Python Software Foundation) library for accessing the Twitter API and a PostgreSQL (PostgreSQL Global Development Group) database for storing the collected tweets.

Data Preprocessing
We identified non-English tweets using the language field in the tweet metadata and removed them from the analysis. We also identified and removed retweets. We removed punctuation, stop words (such as “an” and “the”), and nonprintable characters such as emojis from the tweets. We normalized Twitter user mentions by converting, for example, “@Alaa” to “@username.” Furthermore, various forms of the same word (eg, travels, traveling, and travel’s) were lemmatized by converting them to the base word (eg, travel) using the WordNetLemmatizer module of the Natural Language Toolkit (NLTK) Python library. The data preprocessing is depicted in . Following the terms and conditions, terms of use, and privacy policies of Twitter, all data were anonymized and were not reported verbatim to any third party.

Data Analysis
The processed tweets were analyzed using word frequencies of single words (unigrams) and two-word combinations (bigrams), which were visualized through word clouds to identify the most common topics. In addition, we used topic modeling to identify the most common topics in the tweets. Topic modeling is an unsupervised machine learning technique that can find clusters in a collection of documents (tweets in this case). We used the latent Dirichlet allocation (LDA) algorithm from the Python sklearn package. LDA requires a fixed number of topics, where each topic is represented by a set of words, and its objective is to map the given documents to the set of topics so that the words in each document are mostly captured by those topics. LDA is a widely used topic modeling algorithm, and we used it to find natural clusters in the language of the tweets. We applied topic modeling by specifying the number of topics required by the LDA to separate the set of tweets into clusters; based on our previous work, we selected 30 as the number of topics for running the LDA . We took the top representative words of each of the 30 topics produced by the LDA topic modeling algorithm (see LDA output in ) and the common words from the word cloud (see word cloud in ) and manually analyzed both sets of words.
From this manual analysis, the authors reached a consensus on 12 topics and associated terms, unigrams and bigrams, for each topic (see the associated terms for each topic in ). These terms were then used to classify tweets into topics and to compute the prevalence of each topic. To do so, we developed a rule-based classification script in Python that checks for the presence of any of the preidentified unigrams and bigrams in each tweet. The classification script used a simple string-matching technique to determine whether a given tweet contained the selected keywords of the topics; a tweet that contained a selected keyword related to a certain topic was classified as belonging to that topic. We also performed additional analyses: sentiment analysis, extraction of the mean number of retweets, likes, and followers for each topic, and calculation of the interaction rate for each topic. The sentiment analysis was performed on the tweet text using the Python textblob library; the sentiment score varies between –1.0 and 1.0, with –1.0 as the most negative text and 1.0 as the most positive text. We calculated the mean sentiment and the mean number of likes, retweets, and followers for each topic. We also calculated the interaction rate for each topic by summing the total number of retweets and likes per topic and dividing by the sum of the total number of followers per topic. These measures provided additional insight into the topics and the users who posted in these topics.
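To make the analysis pipeline described above concrete, the following is a minimal sketch of the topic modeling, rule-based classification, sentiment, and interaction rate steps. It is an illustration under stated assumptions rather than the study’s actual script: the toy tweet records, the per-topic keyword lists, and the use of CountVectorizer to build the document-term matrix are assumptions made for the example; only libraries named in this section (sklearn, textblob) are used.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from textblob import TextBlob

# Assumed toy input: preprocessed tweet text plus the metadata kept in the database.
tweets = [
    {"text": "travel ban imposed as death toll rises", "retweets": 3, "likes": 10, "followers": 250},
    {"text": "wearing masks and quarantine slow the outbreak", "retweets": 1, "likes": 4, "followers": 80},
]

# --- Topic modeling with LDA (the study used 30 topics; 2 here for brevity) ---
vectorizer = CountVectorizer()
doc_term = vectorizer.fit_transform([t["text"] for t in tweets])
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(doc_term)
feature_names = vectorizer.get_feature_names_out()  # get_feature_names() in older sklearn
for topic_idx, weights in enumerate(lda.components_):
    top_words = [feature_names[i] for i in weights.argsort()[:-6:-1]]
    print(f"LDA topic {topic_idx}: {top_words}")  # inspected manually alongside the word cloud

# --- Rule-based classification with preidentified keywords (illustrative lists) ---
topic_keywords = {
    "travel": ["travel", "flight", "travel ban"],
    "death": ["death", "death toll", "died"],
    "masks": ["mask", "wearing masks"],
}

def classify(text):
    """Return the topics whose keywords appear in the tweet text (simple string matching)."""
    return {topic for topic, keywords in topic_keywords.items()
            if any(kw in text for kw in keywords)}

# --- Per-topic mean sentiment and interaction rate ---
stats = {topic: {"polarities": [], "retweets": 0, "likes": 0, "followers": 0}
         for topic in topic_keywords}
for tweet in tweets:
    polarity = TextBlob(tweet["text"]).sentiment.polarity  # score in [-1.0, 1.0]
    for topic in classify(tweet["text"]):
        stats[topic]["polarities"].append(polarity)
        stats[topic]["retweets"] += tweet["retweets"]
        stats[topic]["likes"] += tweet["likes"]
        stats[topic]["followers"] += tweet["followers"]

for topic, s in stats.items():
    if s["polarities"] and s["followers"]:
        mean_sentiment = sum(s["polarities"]) / len(s["polarities"])
        # Interaction rate: (total retweets + total likes) / total followers for the topic.
        interaction_rate = (s["retweets"] + s["likes"]) / s["followers"]
        print(topic, round(mean_sentiment, 2), round(interaction_rate, 3))
```

In the study itself, the prevalence figures and the per-topic means reported in the Results were computed in this spirit over the full set of classified tweets rather than over a toy sample.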
Search Results
As shown in , a total of 2,787,247 tweets were obtained between February 2, 2020, and March 15, 2020. Of these tweets, 1,636,422 (58.71%) non-English tweets were removed. Of the 1,150,825 remaining English tweets, 735,182 (63.88%) retweets were excluded. A further 248,570 (21.60%) tweets with no coronavirus-related terms in the text were also removed; these tweets had been captured by the Twitter API because either the name or the profile description of the users matched the search terms. Accordingly, the study analyzed 167,073 unique tweets from 160,829 unique users.

Results of Tweet Analysis
Topics Emerged From Tweets
We identified 12 topics from the analyzed tweets. The 12 topics were grouped into four themes: the origin of COVID-19, the source of the novel coronavirus, the impact of COVID-19 on people and countries, and the methods for decreasing the spread of COVID-19. summarizes the prevalence of the identified topics. Values on the diagonal of the table refer to the numbers and percentages of tweets in a topic, and values off the diagonal indicate the numbers and percentages of tweets in the intersection of two topics (a small illustrative sketch of how such a co-occurrence table can be computed appears at the end of this Results section). For instance, a hypothetical tweet such as “while the death toll due to COVID-19 continues to rise, the travel ban imposed by countries to limit the spread of coronavirus infection started to affect the daily life of many people” could be classified under both travel and death; the value at the intersection of these two topics in the table represents the number and percentage of tweets containing keywords related to both topics. More details about the themes and topics are elaborated in the following subsections.

Theme 1: Origin of COVID-19
This theme contains two topics that discuss the origin of COVID-19. The first topic was China, which was the most common of all identified topics; tweeters talked about China because it was the country where the novel coronavirus originated. The second topic was the outbreak; the tweets in this topic discussed the details of the outbreak, such as how, when, and where it emerged.

Theme 2: Source of the Novel Coronavirus
This theme included tweets about the causes leading to the transfer of COVID-19 to humans. Tweeters identified two sources of the novel coronavirus, which formed two topics in this study: eating meat and developing bioweapons. The former topic (eating meat) was identified in tweets mentioning the role of meat in the spread of COVID-19; most of these tweets blamed nonvegetarians for the outbreak of COVID-19 and asked them to stop eating meat to stop the spread of the coronavirus. The latter topic (bioweapon) was formed by tweets from individuals debating whether or not the COVID-19 virus originated from a Chinese biological military laboratory.

Theme 3: Impact of COVID-19 on People and Countries
The third theme was generated from tweets about the influence of COVID-19 on people, companies, and countries. The tweets in this theme identified six effects of COVID-19, which formed six topics. The first topic related to the number of deaths caused by COVID-19; the tweets in this topic mainly presented statistics and numbers of deaths caused by the coronavirus in different cities and countries. The second topic was the fear and stress caused by COVID-19; in these tweets, Twitter users expressed their fear and stress about the coronavirus due to its quick spread and the lack of treatments or vaccines for the disease it causes.
The third topic related to the effects of COVID-19 on travel from and to China and other countries; these tweets mostly discussed flight cancellations, postponements, travel bans and restrictions, and travel warnings imposed by many countries due to the coronavirus pandemic. The impact of COVID-19 on the economy was the fourth topic; these tweets mostly described actual or expected economic losses of many companies and countries due to, for example, the closure of markets, a drop in oil demand, production delays, and the cancellation of important events as a result of the COVID-19 outbreak. Panic buying was the fifth topic identified; these tweets discussed how individuals in many countries became panic buyers in preparation for curfews, lockdowns, and stay-at-home orders due to the COVID-19 pandemic, and how supermarkets and shops controlled and prevented panic buying. The last topic identified in this theme related to racism; specifically, users in most of these tweets reported the spread of racist, prejudiced, and xenophobic attacks (eg, rude comments or dirty looks) against East Asians given that COVID-19 originated from their countries.

Theme 4: Methods for Decreasing the Spread of COVID-19
The last theme brought together tweets that discussed methods for decreasing the spread of COVID-19. Two methods were identified from these tweets and formed the following two topics: wearing masks and the quarantine of people. Most of the tweets in the former topic talked about either the importance of face masks in slowing the spread of the coronavirus or their shortage in several countries. Most of the tweets in the latter topic were about quarantining individuals who were infected with or suspected of having the coronavirus to reduce or prevent the spread of the disease. As shown in the off-diagonal values in , the most common topic overlap was between China and deaths caused by COVID-19, followed by China and eating meat, China and the outbreak of COVID-19, deaths caused by COVID-19 and eating meat, and China and fear and stress about COVID-19.

Results of Sentiment and Interaction Rate Analysis
As shown in , the mean sentiment was positive for all topics except two: deaths caused by COVID-19 and increased racism. The highest mean positive sentiment was for the eating meat topic, followed by the wearing masks topic, whereas the most negative mean sentiment was for the deaths caused by COVID-19 topic. The mean number of followers of the tweeters who posted the collected tweets ranged from 2878 (increased racism) to 13,361 (economic losses). The economic losses topic had the highest mean number of likes, whereas travel ban and warning-related topics had the lowest. The mean number of retweets varied between 0.89 (panic buying) and 7.11 (eating meat). The lowest interaction rate was for panic buying-related tweets, and the highest was for racism-related tweets, followed by bioweapon-related tweets and eating meat-related tweets . User mentions were most common in China-related tweets and least common in racism-related tweets . Similarly, link sharing was most common in China-related tweets and least common in racism-related tweets . shows more descriptive statistics (ie, medians, variances, standard deviations, maximums, and minimums) for all previously mentioned measures.
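As referenced in the description of the topic prevalence table above, the following is a minimal sketch of how the diagonal values (per-topic tweet counts) and off-diagonal values (topic-pair overlaps) can be derived from the per-tweet topic sets produced by the rule-based classifier. The tweet-to-topic assignments shown are an assumed toy input, not the study’s data.

```python
from itertools import combinations
from collections import Counter

# Assumed toy input: for each tweet, the set of topics assigned by the rule-based classifier.
tweet_topics = [
    {"china", "death"},
    {"china", "eating meat"},
    {"travel", "death"},
    {"china"},
]
n_tweets = len(tweet_topics)

diagonal = Counter()       # topic -> number of tweets classified under that topic
off_diagonal = Counter()   # (topic_a, topic_b) -> number of tweets classified under both

for topics in tweet_topics:
    diagonal.update(topics)
    for pair in combinations(sorted(topics), 2):
        off_diagonal[pair] += 1

for topic, count in diagonal.items():
    print(f"{topic}: {count} ({100 * count / n_tweets:.1f}%)")
for (a, b), count in off_diagonal.items():
    print(f"{a} & {b}: {count} ({100 * count / n_tweets:.1f}%)")
```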
Principal Findings
Users on Twitter discussed 12 main topics across four main themes related to COVID-19 between February 2, 2020, and March 15, 2020. User mentions and link sharing were the most common features in the analyzed tweets. These findings suggest that users on Twitter are interested in notifying or warning their friends and followers about COVID-19, and these interpersonal communications indicate that people bond around the topic of COVID-19 on Twitter. Users on Twitter also focused on the impact of the coronavirus on people and countries. Specifically, numerous tweets were posted on the number of deaths linked to the coronavirus. Furthermore, the emotional and psychological impact of the coronavirus was mentioned in many tweets; users on Twitter may express their fear and stress about COVID-19 given the lack of vaccines to prevent it or of specific antiviral treatments . However, the sensationalistic use of Twitter can be a great challenge for public health and outbreak response efforts because of the rapid spread of misinformation and conspiracy theories . The infectious outbreak of “fake news” and “distorted evidence” in the digital world can create mass panic and cause damaging and devastating consequences in the real world, distorting the evidence base and impeding the response efforts and activities of health care workers and public health systems . Additionally, the economic impact of COVID-19 on companies and countries was discussed in several tweets. Tweeters may have talked about the economic impact of COVID-19 because of, for example, temporary closures of major fast-food chains and retailers (eg, McDonald’s, KFC, Apple, and Adidas) , decreases in auto sales, drops in oil demand, production delays such as with the iPhone, the canceling or postponing of sporting events such as the Formula One World Championship, or decreases in airline revenues due to flight cancellations . It has been estimated that the spread of COVID-19 could cost the worldwide economy a total of US $2.7 trillion . The last impact of COVID-19 discussed by Twitter users was travel; this topic might have been common because most countries banned travel from and to countries that had confirmed the presence of COVID-19 inside their borders. Tweets also focused on two possible sources of the coronavirus: the eating of meat and a Chinese biological military laboratory. Tweeters mentioned two main methods used to decrease the spread of COVID-19: masks and quarantine. The first method (masks) was discussed frequently on Twitter mainly due to the face mask shortages reported in several countries (eg, China, the United Kingdom, and the United States). Quarantine was a common topic in tweets because it was the first step that countries applied to control the outbreak of COVID-19.

Practical and Research Implications
Practical Implications
Research shows that crisis response activities in the real world and online are becoming increasingly “simultaneous and intertwined” . Social media provides a valuable opportunity to spread and disseminate public health knowledge and information directly to the public . However, social media can also be a powerful weapon and, if not used appropriately, can be destructive to public health efforts, especially during a public health crisis. Therefore, more efforts are needed to build national and international disease detection and surveillance systems that examine online content published on the World Wide Web, including social media.
There is a need for a stronger and more proactive public health presence on social media. Governments and health systems should also “listen” to, or monitor, tweets from the public that relate to health, especially in a time of crisis, to help inform policies related to public health (eg, social distancing and quarantine) and supply chains, among many others.

Research Implications
The global COVID-19 outbreak and its rapid spread across countries demonstrate the need for more vigilant and timely responses aided by the research community. Although it was not the focus of this study, future studies should investigate the spread of “fake news” in combination with infectious disease outbreaks . Moreover, there is a need to provide the scientific and public health community with access to a core corpus of social media posts while maintaining privacy. Additional work is necessary on multilingual sentiment analysis of social media platforms, as most research efforts, including this study, have been devoted to English-language data . It could also be useful for future studies to consider longitudinal, multilingual sentiment analysis in addition to concurrent analysis of infectious disease outbreaks on different social media platforms, if feasible.

Strengths and Limitations
Several strengths and limitations can be attributed to this study analyzing tweets related to the recent COVID-19 outbreak. No geographical restrictions were applied to the tweets analyzed, given the worldwide spread of the disease. However, the study only analyzed tweets in the English language, which may limit the generalizability of the findings about this worldwide outbreak. In addition, given that the Twitter standard search API does not allow researchers to obtain tweets posted more than 1 week earlier , we could not obtain COVID-19-related tweets posted before February 2, 2020; thus, the findings may not be generalizable to that period. Moreover, this study could not collect tweets from accounts marked as private; therefore, the findings may not represent all the topics discussed by users on Twitter in relation to COVID-19. Only posts on Twitter were analyzed in this study, so our findings may not be generalizable to other social media platforms. Furthermore, the findings reported in this study are limited to those who have access to and use Twitter. Therefore, caution is advised before assuming the generalizability of the results, as Twitter is not used by everyone in the population.

Conclusion
The COVID-19 pandemic has been affecting many health care systems and nations, claiming the lives of many people. As a vibrant social media platform, Twitter reflected this heavy toll through the interactions and posts people made related to COVID-19. It is clear that coordinating public health crisis response activities in the real world and online is paramount and should be a top priority for all health care systems. We need to build more national and international detection and surveillance systems to detect the spread of infectious diseases and to combat the fake news that usually accompanies them.
Optimization of preventive health care facility locations
Preventive health care programs aim to save lives and contribute to a better quality of life by diagnosing serious medical conditions early and reducing the likelihood of life-threatening disease. Evidence shows that successful treatment of some health problems is more likely if an illness is diagnosed at an early stage. Thus, efficient and effective preventive health care services have been an integral part of many health care reform programs within the past two decades. Facility location decisions are a critical element in strategic planning for preventive health care programs. Previous research shows that facility location plays a key role in the success of preventive health care programs in terms of the participation rate. A survey by Zimmerman finds that the convenience of access to a facility is a very important factor in a client's decision to have prostate cancer screening. Furthermore, a survey by Facione reveals that perceptions of lack of access to services are related to a decrease in mammography participation. A recent review by Baron et al. finds that reducing structural barriers (including the distance required to travel to obtain mammograms) is effective in increasing community access to breast, cervical, and colorectal cancer screening. Characteristics of preventive health care services are inherently different from those of other health care services (such as health care for acute diseases), which requires a different location decision methodology. The first characteristic of preventive health services is that people might not seek services from the closest preventive health care facility. Since preventive services are given to people with no clear symptoms of illness, people who seek preventive services have more flexibility as to when and where to receive preventive health care services. For example, a person living in an area served by two preventive health care clinics within an acceptable travelling distance may choose the closer one because of the convenience, or he/she may go to the farther clinic, located near a shopping mall, because he/she can go shopping after a medical appointment. The second characteristic of preventive health services is that each facility needs to have a minimum number of clients to retain its accreditation, except when there is a policy decision to provide preventive services to sparsely populated neighborhoods. For example, the U.S. Food and Drug Administration (FDA) requires a radiologist to interpret at least 960 mammograms and a radiology technician to perform at least 200 mammograms in 24 months to retain their FDA accreditation. According to a report from the World Health Organization, current health care systems do not make optimal use of available resources to support preventive health care programs. One of the reasons is that the location of preventive health care facilities is determined without fully considering the above two characteristics. In current health care systems, most facilities are located based on responding to emergent medical problems, which assumes that people would seek services from the nearest facility. Thus, location optimization is performed based on the distance between people and their assigned closest facility. In this paper, we present a methodology for the optimal location configuration of preventive health care facilities.
In order to satisfy the characteristics of preventive health care services, we define the concept of accessibility to preventive health care services as the measurement for location optimization. The accessibility to preventive health care services used in this paper comprises three factors: (1) Regional availability of preventive health care services. Regional availability is expressed as the ratio of preventive health care facilities to clients within a region. A client in a higher-ratio region has more convenient access to services. Regional availability considers all of the facilities within an acceptable travelling distance of a client when calculating the accessibility of preventive health care services to that client. The assumption behind regional availability is that people may go to any facility within the acceptable travelling distance constraint, which satisfies the first characteristic of preventive health care services, namely that people might not seek services from the closest preventive health care facility. (2) Travelling distance between facilities and clients. The clients within an acceptable travelling distance of a facility do not share this facility equally, since usage decreases with distance. A closer client has higher accessibility to the facility. This factor satisfies the first law of geography, which states that "everything is related to everything else, but near things are more related than distant things", and the well-known fact that distance affects access to health care services. (3) Each facility should attract a minimum number of clients unless the facility is located in a remote place. This factor satisfies the second characteristic of preventive health care services. We use the Huff-based competitive location model to estimate the workload of facilities. The assumption behind the model is that the probability of a client getting service from a facility within the acceptable travelling distance constraint is related to two elements. The first element is the attraction of the facility. In this paper, the attraction of a facility is described by the inverse travelling distance between the facility and a client. The second element is the inverse of the sum of the attractions of all facilities within the acceptable travelling distance constraint, which means that the more facilities are located within an accessible distance of a client, the lower the chance that a particular facility will be used by the client. In this paper, the accessibility of preventive health care services focuses only on structural barriers that are directly related to the number, concentration, and location of health care facilities. Financial barriers (e.g., availability of insurance coverage) and personal barriers (e.g., social and cultural aspects) are not discussed. Additionally, in this paper, we only consider the configuration of preventive health care facilities. The potential interaction between preventive health care facilities and other facilities (e.g., primary health care facilities) is not considered. Based on the new definition of accessibility, this paper proposes a bi-objective model to optimize the location of preventive health care facilities. As appropriate for publicly funded health care facilities, the optimization objectives are to improve the efficiency and coverage of the preventive health care facilities. The bi-objective model is solved using the Interchange algorithm.
To accelerate the solving process of the Interchange algorithm, two new data structures, 'population groups' and 'candidate string,' are implemented in order to pre-store the accessibility information. Additionally, this paper uses travelling distance and travelling time to measure the spatial barrier between clients and preventive health care facilities. The travelling distance and travelling time are estimated accurately and dynamically by calling the Google Maps Application Programming Interface (API) . The Google Maps API is a software program that defines how other software can request services (the same services we can get from the http://maps.google.com web page manually) from the Google. The Google Maps API is easier than the previous travelling time estimation methods in that it does not need users to supply speed limit maps and gather traffic rules. Finally, the methodology proposed in this paper is evaluated using a real application: optimizing the configuration of breast cancer screening services in Alberta, Canada. Experiments show that the methodology would help to increase the accessibility of breast cancer screening services in the province. In the following sections we: 1) provide a sketch of relevant background literature; 2) formalize the problem in the paper with respect to the characteristics of preventive health care services and present a solution approach; 3) describe the procedure for applying the methodology to a real-world scenario, namely the Alberta breast cancer screening program; and demonstrate the effectiveness and efficiency of the methodology for this purpose; and finally 4) conclude the paper with a discussion of future research directions. Basic facility location models The location of facilities is critical to the success of health care services . Although the health care facility location problem has been studied for thirty years, the characteristics of preventive health care services have not been fully incorporated into the prevailing facility location models. In this subsection, three basic facility location models are introduced first, which are the foundations of most preventive health care facility location models. Three classic facility location models are: the P-median model, the covering model and the center model . All three models assume that people would seek services from the closest facility. The models optimize facility locations based on the distance from clients to their closest facility. The P -median model seeks, for a given number of facilities, to identify locations that minimize the total travelling distance from all clients to their closest serving facilities. As noted by Church and ReVelle , one important way to measure the effectiveness of a facility location is by determining the average distance traveled by those who visit it. With increasing average travelling distance, facility accessibility decreases, and thus the location's effectiveness decreases. This relationship holds for facilities such as libraries and schools, to which proximity is desirable. However, this model does not consider the "worst case" situation and so it may result in inequities, forcing a few remote clients to travel far. The covering model finds the location of a given number of facilities that maximizes the total clients covered by these facilities within a maximum acceptable distance. The covering model is useful to allocate some facilities when minimizing the average distance traveled may not be appropriate. 
For example, emergency service facilities such as fire stations or ambulances need to be located within 15 minutes' travelling time of every client. The critical nature of demands for service will dictate a maximum "acceptable" travelling distance or time. The covering model is widely used to determine the deployment of Emergency Medical Service System (EMS) vehicles in various settings. The center model, for a given number of facilities, identifies a location arrangement that minimizes the maximum distance while requiring coverage of all clients. Unlike the covering model, which takes an input coverage distance, this model determines endogenously the minimal coverage distance associated with locating a given number of facilities. This model is useful when there are not enough facilities in reality while the service has to cover all the clients within a target region. Advanced facility location models for preventive health care facilities Several methodologies for optimizing the configuration of preventive health care facilities have recently been proposed. Verter and Lapierre give a formalization of the preventive health care facility location problem. Their model is based on the covering model and considers the characteristics of preventive health care services by adding two constraints: (1) the probability of participation in a preventive program decreases with the distance between clients and their closest facility; (2) each facility needs to have a minimum number of clients. They use a branch-and-bound based algorithm, one of the main tools for finding the optimal solution of facility location problems, to identify optimal locations of facilities and to maximize participation in prevention programs. Zhang et al. extend Verter and Lapierre's model by using a queue method to capture the level of congestion of facilities in terms of waiting and service time. The queue method represents a facility as a capacitated queue. When a client enters a facility, he/she is put into a queue and waits for service until all the others in the queue in front of him/her have been served. The authors calculate the total (travelling, waiting, and service) time required for receiving the preventive service and use the total time as the accessibility of preventive health care facilities. They assume that each client would seek services from the facility that has the minimum expected total time. The probability of participation in a preventive program decreases with the expected total time rather than the distance to be traveled. Additionally, they provide four heuristic methods to find optimal facility locations and compare their differences in terms of accuracy and computational requirements. Although each study mentioned above contains a relevant element and achieves satisfactory results for some real applications, all of them assume that people would seek services from the closest preventive health care facility (defined either by travelling distance or total service time), which conflicts with the first characteristic of preventive health care programs, namely that people have choices about which preventive health care facility to attend.
The exact solution approach, such as branch and bound, can produce the best solution but cannot handle models with large numbers of constraints and variables, since doing so consumes unacceptable amounts of computational resources. To solve a model with large numbers of constraints and variables, heuristic approaches have been developed. These can produce acceptable solutions with fewer computational resources but do not guarantee finding the best solution. The most well-known algorithm based on the heuristic approach is the Interchange algorithm. The basic idea of the Interchange algorithm is to relocate a facility from its site in the current solution to an unused site. If the relocation produces a better value for a facility location model, then the change is accepted and a new solution is generated. Otherwise, the relocation is cancelled. The search process is repeated until no better solution can be found after relocating every facility. A large number of research approaches for accelerating the Interchange algorithm have been proposed. Densham and Rushton propose to pre-store location information in three data structures: the demand string, the candidate string, and the allocation table. The core idea is to examine only a subset of demand nodes to update the value of facility location models whenever a change of facility locations occurs. The demand string is built for each client location (called a demand node in their work). It lists all candidate locations that can serve the demand node within an acceptable travelling distance. The candidate string is built for each candidate location. It lists all of the demand nodes that can be served by the candidate location within an acceptable travelling distance. The allocation table records the distances from each demand node to the closest and second-closest candidate sites that are occupied by facilities. When one facility moves from one candidate site to another, demand nodes affected by the move can be identified from the candidate strings of the two candidate sites. The change in the value of the facility location model can then be determined by examining these demand nodes in the allocation table. The allocation table needs to be updated when a change is accepted. Since the above data structures accelerate the Interchange algorithm by recording the closest distance between demand nodes and facilities, the algorithm cannot be directly used to solve a preventive health care facility location model, in which people might not seek the service from the closest facility. Measurement of regional availability and facility's workload Besides the travelling distance and total service time, other methods have been developed to measure the accessibility of preventive health care services. According to Joseph and Phillips, regional availability is an approach primarily used to measure the accessibility of health care services by identifying Health Professional Shortage Areas (HPSA). The approach generally assumes that, given a specific range for the service being offered at a facility, every resident within that range is a potential client of the service. The regional availability of health care services is defined as the ratio of the number of health care facilities in a region to the number of people living in that region. People living in a higher-ratio region can more conveniently access the service. Regional availability has been successfully used in measuring the accessibility of primary health care services as well as preventive health care services.
Luo and Wang compare different methods for measuring regional availability and recommend the usage of the two-step floating catchment area (2SFCA) method proposed by Radke and Mu . The travelling distance catchment area of a facility or a client is an area within travelling distance of the facility or client. The 2SFCA method is implemented in two steps. First, it computes a travelling distance catchment area of each facility and calculates a facility-to-client ratio R j of each facility by counting the number of the clients covered by the facility's catchment area. Second, it computes a travelling distance catchment area of each client and calculates the regional availability of each client by summing up all R j values of the facilities within the client's catchment area. However, the 2SFCA approach cannot be directly used for location decision since it does not explicitly deal with the distance effect. The 2SFCA considers that facilities have the same attraction to clients within their catchment areas regardless of their actual travelling distance. Thus, changing the location of facilities would only result in a change in the facility-to-client ratio R j of each facility. The total ratio between facilities and clients would not change as long as the number of facilities and clients are fixed. In this paper we extend the 2SFCA method by adding the distance factor for measuring the accessibility of preventive health care services. For clients in the catchment areas of multiple facilities, the probability that a client visits each facility can be estimated by using a Huff-based competitive model . The expression of the model is: (1) Where P ij is the probability of a client at site i travelling to a facility j ; S j is the size of a facility j ; T ij is the travelling time/distance between site i and facility j; λ is a parameter to reflect the effect of travelling time/distance. By using the model, the number of the clients in each site going to a facility can be estimated by multiplying the number of clients on the site with the probability that the clients at the site travel to the facility. The workload of the facility is estimated by summing up the number of clients traveling to the facility from all sites.
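The display form of the Huff-based model, referenced as equation (1) above, did not survive extraction. Based on the symbol definitions given in the text, the standard Huff formulation it describes can be written as follows; this is a reconstruction, so the notation in the original may differ slightly:

```latex
% Hedged reconstruction of the Huff-based competitive model (equation (1)),
% using only the symbols defined in the surrounding text. The sum runs over
% the competing facilities k considered for the client at site i (in this
% paper, those within the acceptable travelling distance).
P_{ij} \;=\; \frac{S_j / T_{ij}^{\lambda}}{\sum_{k} S_k / T_{ik}^{\lambda}}
```

Under this form, with every S_j = 1 and λ = 1 (the simplifications adopted in the model formulation below), a client with two facilities in reach at 5 km and 10 km would visit them with probabilities 2/3 and 1/3, respectively.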
Formulation of the problem Given a set of population centers and a set of candidate sites for facilities, the Preventive Health Care Facility Location (PHCFL) problem is to identify optimal locations for the predefined number of preventive health care facilities that maximize participation. Since the major determinant of participation in a preventive program is the accessibility of health care services , this paper solves the PHCFL problem by optimizing the accessibility of preventive health care services to population centers. In the following, we first introduce how to calculate the accessibility of preventive health care services to each population center. Then, a bi-objective model is given for the location optimization. For the purposes of clarity, the following definitions pertain: I Set of population centers ( i = 1, ..., | I| ); P i Number of clients in a population center i ; J Set of candidate sites for the location of preventive health care facilities ( j = 1, ..., | J |); n The predefined number of preventive health care facilities; y j If a facility opens at the candidate site j , then y j = 1; Otherwise, y j = 0; n j The facility that is the closest to a candidate site j , n j ∈ J ; d ij Travelling distance between a population center i and a candidate site j ; d 0 The travelling distance threshold of a catchment area; d The travelling distance threshold to define the remote place; A i Accessibility of preventive health care services at a population center i ; W min Minimum required workload of a facility. Accessibility of preventive health care services We define the accessibility of preventive health care services as an index to represent the level of convenience for each population center receiving the service. This can be calculated using the following two steps: Step 1 . For each candidate site j , search all the population locations that are within a travelling distance threshold from the candidate site j (that is, the catchment area of j ), and compute the facility-to-client ratio R j , within the catchment area: (2) Where P i is the number of the clients in a population center i . Step 2 . For each population center i , search all the facilities whose locations that are within the travelling distance threshold from a population center i (that is, the catchment area of i ), and the sum up the inverse distance-weighted facility-to-client ratio R j . (3) Constraint (a) requires the number of facilities to be equal to a predefined number n . Constraint (b) ensures that the population covered by each facility is beyond the minimum workload or that a facility is open in a remote place. In constraint (b), first we use the Huff-based competitive model to estimate the probability of a client in a population center i traveling to a candidate site j as . Compared with equation (1), S j is set to one since we assume the size of each preventive health care facility is the same. λ is set to one. Second, from the Huff-based model, the number of clients in a population center i traveling to a candidate site j is estimated by multiplying the number of clients in the population center i with the probability that the clients in the population center i traveling to the candidate site j . Therefore, the workload of the facility in a candidate site j is estimated by summing up the number of clients from all the population centers within the candidate site j 's catchment area. In addition, we use a predefined travelling distance d as a threshold for choosing remote places. 
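The display equations (2) and (3) and the algebraic statements of constraints (a) and (b) referenced above did not survive extraction. The following is a hedged reconstruction from the verbal definitions and the notation list; the remote-area exception in constraint (b) is elaborated in the paragraph that follows, and the original notation may differ:

```latex
% Hedged reconstruction of equations (2)-(3) and constraints (a)-(b).

% (2) Step 1: facility-to-client ratio of a candidate site j
R_j = \frac{1}{\sum_{\{i \,:\, d_{ij} \le d_0\}} P_i}

% (3) Step 2: accessibility of a population center i, summed over the
%     open facilities within its catchment area
A_i = \sum_{\{j \,:\, d_{ij} \le d_0,\; y_j = 1\}} \frac{R_j}{d_{ij}}

% (a) Exactly n facilities are opened
\sum_{j \in J} y_j = n

% (b) Every open facility either attracts at least the minimum workload
%     (estimated with the Huff-based probabilities, S_j = 1, lambda = 1)
%     or is located in a remote place; \bar{d} denotes the remote-place
%     threshold written simply as d in the notation list, and n_j is the
%     facility closest to candidate site j
\sum_{\{i \,:\, d_{ij} \le d_0\}} P_i \,
  \frac{1/d_{ij}}{\sum_{\{k \,:\, d_{ik} \le d_0,\; y_k = 1\}} 1/d_{ik}}
  \;\ge\; W_{\min}
  \quad \text{or} \quad d_{j\,n_j} \ge \bar{d},
  \qquad \text{for all } j \text{ with } y_j = 1
```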
For remote areas, the constraint of the minimum workload is not required. We define a place as remote if the distance from it to other facilities is over d (Usually d >> d 0 ). In Step 1, the facility-to-client ratio R j describes the regional availability of each facility. A higher ratio indicates that fewer clients share a facility, and vice-versa. Step 2 first adds the distance factor by multiplying the inverse distance with the facility-to-client ratio R j . This takes into account the fact that all the clients within a facility's catchment area do not share this facility equally, rather that usage decreases with distance from the facility; second, the accessibility to a population center is calculated by summing up the inverse distance-weighted facility-to-population ratios of the facilities within the population center's catchment area. This step satisfies the assumption that people may go to any facility as long as it is within an acceptable travelling distance, which is defined as the travelling distance threshold d 0 . In other words, for a given population center, the more facilities are within the acceptable travelling distance and the closer these facilities are to this population center, the higher possibility the clients in the population center access a preventive health care service. A bi-objective model For the optimal design of preventive health care programs, two important objectives should be considered, efficiency and coverage . The efficiency objective aims to maximize social welfare by achieving an optimal arrangement of health care facilities. Coverage aims to serve more people within a target area. In the above definition, the clients in a population center i can access services as long as the value A i is not zero and a larger value of A i indicates a better accessibility at a population center i . In this paper, we achieve the efficiency objective by maximizing the sum of population weighted accessibility values (equation (3)). We achieve the coverage objective by maximizing the number of people within the acceptable travelling distance of at least one facility (equation (5)). Therefore, the PHCFL problem can be formalized as a bi-objective model, shown as equation (6). (4) (5) (6) Where α is defined as a co-efficient for balancing the two objectives. The value of α is determined by the importance of each objective according to the requirements of real-world applications. If α = 0, then the objective focuses only on service efficiency pertaining to overloaded facilities in high density population areas. With an increase in the α value, increased attention is paid to service 'coverage'. If α = + ∞, then the objective is only to eliminate uneven accessibility, thereby making the analysis the same as for the covering model. Solution approach to the bi-objective model We use the Interchange algorithm to solve the bi-objective model. Since the data structures proposed by Densham and Rushton do not record the accessibility values, they cannot be directly used to solve the bi-objective model. To accelerate the Interchange algorithm, we build two new data structures: population group and candidate string . The rationale for building these two data structures is the same as the idea in Densham and Rushton , which is to accelerate the Interchange algorithm by examining only a subset of population centers to update the value of the bi-objective model whenever a change of facility locations occurs. 
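Before turning to the data structures used to accelerate the search, here is a hedged reconstruction of the objective expressions (4)-(6) referenced above, again inferred from the verbal definitions because the display equations did not survive extraction (the auxiliary labels Z1 and Z2 are introduced here only for readability):

```latex
% Hedged reconstruction of the efficiency objective (4), the coverage
% objective (5), and the combined bi-objective (6).

% (4) Efficiency: maximize the sum of population-weighted accessibility values
\max \; Z_1 = \sum_{i \in I} P_i \, A_i

% (5) Coverage: maximize the number of clients within the acceptable
%     travelling distance of at least one open facility (i.e., with A_i > 0)
\max \; Z_2 = \sum_{\{i \,:\, A_i > 0\}} P_i

% (6) Bi-objective model, with the coefficient alpha balancing the two
\max \; Z = Z_1 + \alpha \, Z_2
```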
Population group is a data structure that aggregates similar population centers. Since the population centers in the same group are covered by the same set of candidate sites, they have the same accessibility value. For the example shown in Figure , Table lists the population groups. Each population group records the candidate sites covering it and the potential population weighted accessibility value contributed from those candidate sites. For example, { O 4 } is covered by the catchment areas of a , b and c . According to equation (3), the accessibility value A 4 of the population center O 4 is . So, the potential population weighted accessibility value contributed from the candidate site a is ; from the candidate site b is ; from the candidate site c is , where P 4 is the number of clients in the population center O 4 . A candidate string is built for every candidate site. The candidate string lists all of the population groups that can be covered by the candidate site. It is used to quickly find the population groups affected by the change of facility locations. As shown in Table , three candidate strings are built for the example in Figure . In the candidate string of the candidate site a , three population groups { O 1 }, { O 2 , O 3 } and { O 4 } are listed. Population centers { O 2 , O 3 }, { O 4 }, { O 5 } and { O 6 } are listed in the candidate string of the candidate site b . The candidate string of the candidate site c has three population centers: { O 4 }, { O 5 } and { O 7 }. When moving a facility from one candidate site to another, the change value of the bi-objective model (equation (6)) can be calculated by only examining the population groups listed under the candidate strings of the two sites. According to equation (6), the value of the bi-objective model includes the sum of population weighted accessibility values and the number of people covered by the facilities. The change of the total population weighted accessibility value that results from moving from one site to another can be calculated by subtracting the population weighted accessibility value contributed from one site by that of another. For example, a facility is changed from the candidate site a to c . The population groups listed in the candidate string of the candidate site a is { O 1 }, { O 2 , O 3 } and { O 4 }. From the population group data structure, we know that the population weighted accessibility value contributed from the candidate site a in population group { O 1 } is , population group { O 2 , O 3 } is , and population group { O 4 } is . Therefore, the population weighted accessibility value contributed by the candidate site a is . The population groups listed in the candidate string of the candidate site c is { O 4 }, { O 5 } and { O 7 }. The population weighted accessibility value contributed from the candidate site c in population group { O 4 }, { O 5 } and { O 7 } are , and , respectively. The population weighted accessibility value contributed from the candidate site c is . Thus, the change of the population weighted accessibility value from the candidate site a to c can be calculated by: Similarly, the change in the number of people covered is the difference between the number of people covered by the original site and the number of people covered by the new site. For our example, the number of clients covered by a is P 1 + P 2 + P 3 + P 4 , and the number of clients covered by c is P 4 + P 5 + P 7 . 
So, when the facility location moves from a to c , the change of the number of clients covered is ( P 4 + P 5 + P 7 ) - ( P 1 + P 2 + P 3 + P 4 ). Compared to the data structures in , the population group and candidate string do not need to be updated after facility locations change. The reason is that, given an acceptable traveling distance threshold, the catchment area of each candidate site and population center do not change. Neither the number of facilities in a population center's catchment area nor the number of population centers in a candidate site's catchment area change.
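To make the search procedure described above concrete, the following is a minimal, self-contained Python sketch of an interchange (vertex-substitution) search over precomputed catchments, in the spirit of the candidate-string idea. The toy data, names, and simplifications (the objective is recomputed from scratch rather than updated incrementally, and the minimum-workload constraint is omitted) are illustrative assumptions, not the authors' implementation.

```python
import itertools

# --- Toy, illustrative data (hypothetical values, not from the paper) ---
population = {"O1": 120, "O2": 80, "O3": 60, "O4": 200, "O5": 90}  # clients P_i
candidates = ["a", "b", "c", "d"]                                   # candidate sites
# Travelling distances d_ij (km) between population centers and candidate sites
dist = {
    ("O1", "a"): 3,  ("O1", "b"): 12, ("O1", "c"): 25, ("O1", "d"): 30,
    ("O2", "a"): 6,  ("O2", "b"): 7,  ("O2", "c"): 22, ("O2", "d"): 28,
    ("O3", "a"): 9,  ("O3", "b"): 5,  ("O3", "c"): 18, ("O3", "d"): 26,
    ("O4", "a"): 15, ("O4", "b"): 6,  ("O4", "c"): 8,  ("O4", "d"): 20,
    ("O5", "a"): 27, ("O5", "b"): 16, ("O5", "c"): 5,  ("O5", "d"): 7,
}
D0 = 10.0        # catchment (acceptable travelling distance) threshold d_0
ALPHA = 0.5      # coefficient balancing efficiency and coverage
N_FACILITIES = 2

# "Candidate strings": for each candidate site, the population centers it covers
candidate_string = {j: [i for i in population if dist[(i, j)] <= D0] for j in candidates}
# For each population center, the candidate sites within its catchment
covering_sites = {i: [j for j in candidates if dist[(i, j)] <= D0] for i in population}

def objective(open_sites):
    """Population-weighted accessibility plus ALPHA times the covered population."""
    open_sites = set(open_sites)
    # Step 1: facility-to-client ratio R_j of each open site
    R = {}
    for j in open_sites:
        covered_pop = sum(population[i] for i in candidate_string[j])
        R[j] = 1.0 / covered_pop if covered_pop > 0 else 0.0
    # Step 2: accessibility A_i of each population center
    total_weighted_access, covered_clients = 0.0, 0
    for i, p_i in population.items():
        reachable = [j for j in covering_sites[i] if j in open_sites]
        a_i = sum(R[j] / dist[(i, j)] for j in reachable)
        total_weighted_access += p_i * a_i
        if reachable:
            covered_clients += p_i
    return total_weighted_access + ALPHA * covered_clients

def interchange_search(n):
    """Interchange search: swap an open site for an unused one while it improves."""
    current = set(candidates[:n])
    best = objective(current)
    improved = True
    while improved:
        improved = False
        unused = [c for c in candidates if c not in current]
        for j_out, j_in in itertools.product(list(current), unused):
            trial = (current - {j_out}) | {j_in}
            value = objective(trial)
            if value > best:
                current, best, improved = trial, value, True
                break  # restart scanning from the new solution
    return current, best

if __name__ == "__main__":
    sites, value = interchange_search(N_FACILITIES)
    print("Selected sites:", sorted(sites), "objective:", round(value, 4))
```

In the approach described above, the change in the objective caused by a swap would instead be evaluated incrementally from the stored population groups and candidate strings, rather than by recomputing the objective from scratch as in this sketch, which is what makes the search tractable on large problem instances.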
Given a set of population centers and a set of candidate sites for facilities, the Preventive Health Care Facility Location (PHCFL) problem is to identify optimal locations for the predefined number of preventive health care facilities that maximize participation. Since the major determinant of participation in a preventive program is the accessibility of health care services , this paper solves the PHCFL problem by optimizing the accessibility of preventive health care services to population centers. In the following, we first introduce how to calculate the accessibility of preventive health care services to each population center. Then, a bi-objective model is given for the location optimization. For the purposes of clarity, the following definitions pertain: I Set of population centers ( i = 1, ..., | I| ); P i Number of clients in a population center i ; J Set of candidate sites for the location of preventive health care facilities ( j = 1, ..., | J |); n The predefined number of preventive health care facilities; y j If a facility opens at the candidate site j , then y j = 1; Otherwise, y j = 0; n j The facility that is the closest to a candidate site j , n j ∈ J ; d ij Travelling distance between a population center i and a candidate site j ; d 0 The travelling distance threshold of a catchment area; d The travelling distance threshold to define the remote place; A i Accessibility of preventive health care services at a population center i ; W min Minimum required workload of a facility. Accessibility of preventive health care services We define the accessibility of preventive health care services as an index to represent the level of convenience for each population center receiving the service. This can be calculated using the following two steps: Step 1 . For each candidate site j , search all the population locations that are within a travelling distance threshold from the candidate site j (that is, the catchment area of j ), and compute the facility-to-client ratio R j , within the catchment area: (2) Where P i is the number of the clients in a population center i . Step 2 . For each population center i , search all the facilities whose locations that are within the travelling distance threshold from a population center i (that is, the catchment area of i ), and the sum up the inverse distance-weighted facility-to-client ratio R j . (3) Constraint (a) requires the number of facilities to be equal to a predefined number n . Constraint (b) ensures that the population covered by each facility is beyond the minimum workload or that a facility is open in a remote place. In constraint (b), first we use the Huff-based competitive model to estimate the probability of a client in a population center i traveling to a candidate site j as . Compared with equation (1), S j is set to one since we assume the size of each preventive health care facility is the same. λ is set to one. Second, from the Huff-based model, the number of clients in a population center i traveling to a candidate site j is estimated by multiplying the number of clients in the population center i with the probability that the clients in the population center i traveling to the candidate site j . Therefore, the workload of the facility in a candidate site j is estimated by summing up the number of clients from all the population centers within the candidate site j 's catchment area. In addition, we use a predefined travelling distance d as a threshold for choosing remote places. 
For remote areas, the constraint of the minimum workload is not required. We define a place as remote if the distance from it to other facilities is over d (Usually d >> d 0 ). In Step 1, the facility-to-client ratio R j describes the regional availability of each facility. A higher ratio indicates that fewer clients share a facility, and vice-versa. Step 2 first adds the distance factor by multiplying the inverse distance with the facility-to-client ratio R j . This takes into account the fact that all the clients within a facility's catchment area do not share this facility equally, rather that usage decreases with distance from the facility; second, the accessibility to a population center is calculated by summing up the inverse distance-weighted facility-to-population ratios of the facilities within the population center's catchment area. This step satisfies the assumption that people may go to any facility as long as it is within an acceptable travelling distance, which is defined as the travelling distance threshold d 0 . In other words, for a given population center, the more facilities are within the acceptable travelling distance and the closer these facilities are to this population center, the higher possibility the clients in the population center access a preventive health care service. A bi-objective model For the optimal design of preventive health care programs, two important objectives should be considered, efficiency and coverage . The efficiency objective aims to maximize social welfare by achieving an optimal arrangement of health care facilities. Coverage aims to serve more people within a target area. In the above definition, the clients in a population center i can access services as long as the value A i is not zero and a larger value of A i indicates a better accessibility at a population center i . In this paper, we achieve the efficiency objective by maximizing the sum of population weighted accessibility values (equation (3)). We achieve the coverage objective by maximizing the number of people within the acceptable travelling distance of at least one facility (equation (5)). Therefore, the PHCFL problem can be formalized as a bi-objective model, shown as equation (6). (4) (5) (6) Where α is defined as a co-efficient for balancing the two objectives. The value of α is determined by the importance of each objective according to the requirements of real-world applications. If α = 0, then the objective focuses only on service efficiency pertaining to overloaded facilities in high density population areas. With an increase in the α value, increased attention is paid to service 'coverage'. If α = + ∞, then the objective is only to eliminate uneven accessibility, thereby making the analysis the same as for the covering model.
We define the accessibility of preventive health care services as an index to represent the level of convenience for each population center receiving the service. This can be calculated using the following two steps: Step 1 . For each candidate site j , search all the population locations that are within a travelling distance threshold from the candidate site j (that is, the catchment area of j ), and compute the facility-to-client ratio R j , within the catchment area: (2) Where P i is the number of the clients in a population center i . Step 2 . For each population center i , search all the facilities whose locations that are within the travelling distance threshold from a population center i (that is, the catchment area of i ), and the sum up the inverse distance-weighted facility-to-client ratio R j . (3) Constraint (a) requires the number of facilities to be equal to a predefined number n . Constraint (b) ensures that the population covered by each facility is beyond the minimum workload or that a facility is open in a remote place. In constraint (b), first we use the Huff-based competitive model to estimate the probability of a client in a population center i traveling to a candidate site j as . Compared with equation (1), S j is set to one since we assume the size of each preventive health care facility is the same. λ is set to one. Second, from the Huff-based model, the number of clients in a population center i traveling to a candidate site j is estimated by multiplying the number of clients in the population center i with the probability that the clients in the population center i traveling to the candidate site j . Therefore, the workload of the facility in a candidate site j is estimated by summing up the number of clients from all the population centers within the candidate site j 's catchment area. In addition, we use a predefined travelling distance d as a threshold for choosing remote places. For remote areas, the constraint of the minimum workload is not required. We define a place as remote if the distance from it to other facilities is over d (Usually d >> d 0 ). In Step 1, the facility-to-client ratio R j describes the regional availability of each facility. A higher ratio indicates that fewer clients share a facility, and vice-versa. Step 2 first adds the distance factor by multiplying the inverse distance with the facility-to-client ratio R j . This takes into account the fact that all the clients within a facility's catchment area do not share this facility equally, rather that usage decreases with distance from the facility; second, the accessibility to a population center is calculated by summing up the inverse distance-weighted facility-to-population ratios of the facilities within the population center's catchment area. This step satisfies the assumption that people may go to any facility as long as it is within an acceptable travelling distance, which is defined as the travelling distance threshold d 0 . In other words, for a given population center, the more facilities are within the acceptable travelling distance and the closer these facilities are to this population center, the higher possibility the clients in the population center access a preventive health care service.
A bi-objective model

For the optimal design of preventive health care programs, two important objectives should be considered: efficiency and coverage. The efficiency objective aims to maximize social welfare by achieving an optimal arrangement of health care facilities. Coverage aims to serve more people within a target area. In the above definition, the clients in a population center i can access services as long as the value A_i is not zero, and a larger value of A_i indicates better accessibility at the population center i. In this paper, we achieve the efficiency objective by maximizing the sum of population-weighted accessibility values (equation (3)). We achieve the coverage objective by maximizing the number of people within the acceptable travelling distance of at least one facility (equation (5)). Therefore, the PHCFL problem can be formalized as a bi-objective model, shown as equation (6):

(4)  Efficiency = Σ_i P_i · A_i
(5)  Coverage = Σ_i P_i · y_i, where y_i = 1 if at least one open facility lies within d_0 of population center i, and 0 otherwise
(6)  maximize  Σ_i P_i · A_i + α · Σ_i P_i · y_i

where α is defined as a coefficient for balancing the two objectives. The value of α is determined by the importance of each objective according to the requirements of real-world applications. If α = 0, the objective focuses only on service efficiency, which pertains to overloaded facilities in high-density population areas. As α increases, more attention is paid to service 'coverage'. If α = +∞, the objective is only to eliminate uneven accessibility, making the analysis the same as for the covering model.
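A hedged sketch of how the combined value could be evaluated for a given set of open facilities, reusing the accessibility() helper from the earlier sketch; the exact algebraic form of equations (4)-(6) is assumed from the description above rather than copied from the paper.

```python
def objective_value(P, dist, facilities, d0, alpha, accessibility):
    """Bi-objective value: population-weighted accessibility + alpha * covered population."""
    A = accessibility(P, dist, facilities, d0)
    efficiency = sum(P[i] * A.get(i, 0.0) for i in P)               # equation (4), assumed form
    coverage = sum(P[i] for i in P
                   if any(dist[(i, j)] <= d0 for j in facilities))  # equation (5), assumed form
    return efficiency + alpha * coverage                            # equation (6), assumed form
```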
We use the Interchange algorithm to solve the bi-objective model. Since the data structures proposed by Densham and Rushton do not record the accessibility values, they cannot be used directly to solve the bi-objective model. To accelerate the Interchange algorithm, we build two new data structures: the population group and the candidate string. The rationale for building these two data structures is the same as the idea in Densham and Rushton, which is to accelerate the Interchange algorithm by examining only a subset of population centers when updating the value of the bi-objective model whenever a change of facility locations occurs.

A population group is a data structure that aggregates similar population centers. Since the population centers in the same group are covered by the same set of candidate sites, they have the same accessibility value. For the example shown in Figure, Table lists the population groups. Each population group records the candidate sites covering it and the potential population-weighted accessibility value contributed by those candidate sites. For example, {O_4} is covered by the catchment areas of a, b and c. According to equation (3), the accessibility value A_4 of the population center O_4 is A_4 = R_a/d_4a + R_b/d_4b + R_c/d_4c. So, the potential population-weighted accessibility value contributed from the candidate site a is P_4·R_a/d_4a, from the candidate site b it is P_4·R_b/d_4b, and from the candidate site c it is P_4·R_c/d_4c, where P_4 is the number of clients in the population center O_4.

A candidate string is built for every candidate site. The candidate string lists all of the population groups that can be covered by the candidate site. It is used to quickly find the population groups affected by a change of facility locations. As shown in Table, three candidate strings are built for the example in Figure. In the candidate string of the candidate site a, three population groups {O_1}, {O_2, O_3} and {O_4} are listed. Population groups {O_2, O_3}, {O_4}, {O_5} and {O_6} are listed in the candidate string of the candidate site b. The candidate string of the candidate site c has three population groups: {O_4}, {O_5} and {O_7}.

When moving a facility from one candidate site to another, the change in the value of the bi-objective model (equation (6)) can be calculated by examining only the population groups listed under the candidate strings of the two sites. According to equation (6), the value of the bi-objective model includes the sum of population-weighted accessibility values and the number of people covered by the facilities. The change in the total population-weighted accessibility value that results from moving from one site to another can be calculated by subtracting the population-weighted accessibility value contributed by one site from that of the other. For example, suppose a facility is moved from the candidate site a to c. The population groups listed in the candidate string of the candidate site a are {O_1}, {O_2, O_3} and {O_4}. From the population group data structure, we know that the population-weighted accessibility value contributed from the candidate site a in population group {O_1} is P_1·R_a/d_1a, in population group {O_2, O_3} is P_2·R_a/d_2a + P_3·R_a/d_3a, and in population group {O_4} is P_4·R_a/d_4a. Therefore, the population-weighted accessibility value contributed by the candidate site a is the sum of these three terms. The population groups listed in the candidate string of the candidate site c are {O_4}, {O_5} and {O_7}.
The population-weighted accessibility values contributed from the candidate site c in population groups {O_4}, {O_5} and {O_7} are P_4·R_c/d_4c, P_5·R_c/d_5c and P_7·R_c/d_7c, respectively, so the population-weighted accessibility value contributed by the candidate site c is the sum of these three terms. Thus, the change in the population-weighted accessibility value when moving from the candidate site a to c is the contribution of c minus the contribution of a. Similarly, the change in the number of people covered is the difference between the number of people covered by the new site and the number of people covered by the original site. For our example, the number of clients covered by a is P_1 + P_2 + P_3 + P_4, and the number of clients covered by c is P_4 + P_5 + P_7. So, when the facility location moves from a to c, the change in the number of clients covered is (P_4 + P_5 + P_7) − (P_1 + P_2 + P_3 + P_4). Compared to the data structures in Densham and Rushton, the population group and candidate string do not need to be updated after facility locations change. The reason is that, given an acceptable travelling distance threshold, the catchment areas of the candidate sites and population centers do not change: neither the number of facilities in a population center's catchment area nor the number of population centers in a candidate site's catchment area changes.
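The incremental evaluation behind this worked example can be sketched as below. It only illustrates the bookkeeping, assuming the per-site, per-group contributions and group populations have been precomputed into the population group and candidate string structures; the coverage term mirrors the simple new-minus-old form of the example.

```python
def swap_delta(out_site, in_site, candidate_string, contribution, group_population, alpha):
    """Change in the bi-objective value when a facility moves from out_site to in_site.

    candidate_string : dict {site: population groups covered by that site}
    contribution     : dict {(site, group): population-weighted accessibility the site
                             contributes to the group}
    group_population : dict {group: total clients in the group}
    """
    # Accessibility term: incoming site's contribution minus outgoing site's contribution.
    delta_access = (sum(contribution[(in_site, g)] for g in candidate_string[in_site])
                    - sum(contribution[(out_site, g)] for g in candidate_string[out_site]))
    # Coverage term, as in the worked example: clients covered by the new site
    # minus clients covered by the original site.
    delta_cover = (sum(group_population[g] for g in candidate_string[in_site])
                   - sum(group_population[g] for g in candidate_string[out_site]))
    return delta_access + alpha * delta_cover
```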
In this section, we apply our method to a real-world application, the breast cancer screening program in Alberta, Canada.

Problem statement and data issues

Breast cancer is the most common cancer among Canadian women. In 2009, an estimated 22,700 Canadian women would be diagnosed with breast cancer and 5,400 would die from the disease; one in 9 women is expected to develop breast cancer during her lifetime and one in 28 will die from it. Evidence from randomized controlled trials supports the recommendation that women aged 50 to 69 years be screened with annual or biennial mammography to reduce their risk of dying from breast cancer. A population-based program to increase the number of Alberta women screened regularly for breast cancer was implemented in 1990, and today the Alberta Breast Cancer Screening Program (ABCSP) recommends that Alberta women between the ages of 50 and 69 have a screening mammogram at least once every two years. A key challenge is to determine the optimal number of screening facilities and their locations. Our research considers the demand for services as measured by the population in target groups (women between the ages of 50 and 69) in various locations. Estimates of the target population (Alberta women aged 50 to 69 years) were derived from census data at the Dissemination Area (DA) level from the 2006 Canadian census (Statistics Canada). There are 327,830 women within the target age range in Alberta. In order to calculate the distance between the DAs and the facilities, we used the Postal Code Conversion File (PCCF) to estimate the location of the DAs. A total of 5,180 DAs were used in the research; their target-population counts range from 0 to 920. The 53 existing sites providing screening mammography in Alberta were extracted from the ABCSP. In addition, 92 candidate screening sites in Alberta were extracted from the Alberta Health Services website. The candidate screening sites were defined as hospitals and cancer care facilities registered in Alberta but not used for breast cancer screening. The locations of clinics were geocoded to point locations using the GIS address matching technique. Figure shows the location of the DAs, the location of existing clinics, and the candidate sites for the screening service.

Travelling distance and travelling time estimation

In this subsection, we briefly introduce how we use the Google Maps API to estimate the travelling distance and travelling time between any pair of DA and facility. The process comprises four steps (as shown in Figure):

(1) Save the location information of the facilities in the Facility Table as a six-digit postal code attribute. Create the Facility Coordinates Table by geocoding each six-digit postal code in the Facility Table to coordinates.
(2) Save the ID number and the population of each DA in the DA Table. Create the DA Coordinates Table by using the PCCF to estimate the coordinates of each DA record in the DA Table.
(3) Create the Euclidean Distance Table by calculating the Euclidean distance between every pair of a DA in the DA Coordinates Table and a facility in the Facility Coordinates Table.
(4) Create the Travelling Distance and Time Table by calculating the travelling distance and time between the DA and the facility in each record of the Euclidean Distance Table. The calculation is implemented in JavaScript by calling the Google Maps API.

The pseudo code in Figure shows how to calculate the travelling distance and time between one DA/facility pair.
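The figure's pseudo code (explained in the next paragraph) uses the legacy GDirections class of the JavaScript Maps API. Purely as a rough, hypothetical stand-in, the same per-pair lookup could be performed against the Google Maps Distance Matrix web service; the endpoint, parameters and response fields below are assumptions about that service, not the authors' code.

```python
import requests

DISTANCE_MATRIX_URL = "https://maps.googleapis.com/maps/api/distancematrix/json"

def driving_distance_and_time(da_latlng, facility_latlng, api_key):
    """Estimate driving distance (metres) and time (seconds) for one DA/facility pair."""
    params = {
        "origins": f"{da_latlng[0]},{da_latlng[1]}",
        "destinations": f"{facility_latlng[0]},{facility_latlng[1]}",
        "mode": "driving",
        "key": api_key,
    }
    response = requests.get(DISTANCE_MATRIX_URL, params=params, timeout=30).json()
    element = response["rows"][0]["elements"][0]
    return element["distance"]["value"], element["duration"]["value"]
```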
In the figure's pseudo code, an object instance called directionObject is first created for the class GDirections (line 1). GDirections is a class defined in the Google Maps API and is used to obtain driving information and display it on a map. Second, the coordinates of the facility and the DA are uploaded as a string query using the load function of the GDirections class (lines 2-3). The load function extracts the coordinates from the string and sets the departure and destination locations for the next step in the calculation. Finally, the travelling distance and time between the uploaded DA and facility are obtained using the getDuration and getDistance functions of the GDirections class (lines 4-5).

Optimal facility configuration

In this subsection, our method is used to optimize the locations of screening clinics. Since the number of current screening sites in Alberta is 53, the predefined number of preventive health care facilities n is set to 53. The threshold travelling distance d_0 of each facility is defined as a thirty-minute driving time, a standard used by the U.S. Department of Health and Human Services for defining service areas. The minimum required workload at each facility, W_min, is set to 4,000 according to the policy decision made by the Ministry of Health. The predefined travelling distance for remote locations, d, is set to 2·d_0. The coefficient factor α in the objective model is equal to 30. Figure shows the influence of the accessibility measurement on the existing facility configuration. The accessibility values of population centers range from 0 to 115.95. In Figure, it is obvious that most screening clinics are located in the two large metropolitan areas, Calgary and Edmonton, while remote locations, such as the eastern border area, lack clinics. Figure and show the location of facilities in the Calgary and Edmonton metropolitan areas, respectively. Based on the workload estimation method described above, one facility in north Calgary and one facility in southwest Edmonton cannot serve enough clients. Figure shows the influence of the accessibility measurement on the optimal facility configuration. The accessibility values of population centers range from 0 to 66.37. Compared with the existing facility configuration, the accessibility values in seven areas under the optimal facility configuration (shown in the circles in Figure) are dramatically higher. The facilities in the Calgary and Edmonton metropolitan areas are shown in Figure and, respectively. In addition, all of the facilities have sufficient clients. Table compares the optimal facility configuration with the existing facility configuration based on average accessibility, coverage, and maximal accessibility. The average accessibility records the mean population-weighted accessibility value over all population centers. The coverage records the percentage of the population that can access the service within the travelling distance threshold d_0. Table shows that the optimal facility configuration achieves better results in that it increases the average accessibility from 0.35 to 0.40 and improves the coverage from 78.42% to 81.86%. The value of maximal accessibility is smaller in the optimal facility configuration than in the existing facility configuration because, with our method, some facilities in the high-accessibility areas of the existing configuration are relocated to remote places.
We also separate the accessibility values into different value segments and, for each segment, compare the number of people under the optimal facility configuration with the number under the existing facility configuration. People in the zero segment are not covered by any facility. The optimal facility configuration is better than the existing configuration because it reduces the number of people in this segment. People in the non-zero segments are covered by at least one facility, and people in higher value segments can get more convenient service. Compared with the existing facility configuration, the optimal facility configuration brings more people into higher value segments.

Parametric analyses

In this subsection, we perform sensitivity analyses on the impact of the following parameters in the real application:

• α, the coefficient factor in the objective function;
• n, the predefined number of preventive health care facilities.

In Figure, we plot the optimal facility configurations for different parameters, together with the existing facility configuration, in a solution space. Since we formalized the PHCFL problem as a bi-objective model, the solution space has two dimensions: the Y axis represents the efficiency (the average accessibility value of a facility configuration) and the X axis represents the coverage (the coverage value of that facility configuration). From Figure, two conclusions can be made. First, changing the value of α cannot improve efficiency and coverage simultaneously: the optimal facility configurations denoted by dots show that, as α increases, the efficiency of the optimal facility configuration decreases while its coverage increases. Second, with an increase in the predefined number of facilities allowed for a given configuration, both the efficiency and the coverage of that configuration increase (denoted by squares). In addition, the optimal facility configuration can produce higher efficiency and coverage values with just 49 facilities than the existing configuration achieves with 53 facilities.
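The parameter sweep behind the solution-space plot can be sketched as follows; solve_phcfl is a hypothetical wrapper around the Interchange solver that returns the chosen facility set together with its coverage and efficiency values.

```python
def sensitivity_analysis(alphas, ns, solve_phcfl):
    """Collect (coverage, efficiency) points for each (alpha, n) setting, ready for plotting."""
    points = []
    for n in ns:
        for alpha in alphas:
            _, coverage, efficiency = solve_phcfl(alpha=alpha, n=n)
            points.append({"alpha": alpha, "n": n,
                           "coverage": coverage, "efficiency": efficiency})
    return points
```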
This paper presents a method for locating preventive health care facilities so as to maximize participation. Assuming that the accessibility of a preventive health care service is a major determinant of participation in that service, this paper formalizes and solves the preventive health care facility location problem by optimizing the accessibility of preventive health care services. Unlike traditional methods, which measure accessibility based only on distance, this paper defines a new accessibility measurement that combines the two-step floating catchment area method, the distance factor and the Huff-based model. The new accessibility measurement is suitable for preventive health care services because it considers two unique characteristics of these services. The paper also proposes a bi-objective model for performing location optimization. The bi-objective model is solved by the Interchange algorithm; to accelerate the solving process, we implement the Interchange algorithm using the population group and candidate string data structures. In addition, this paper estimates the travelling distance and travelling time accurately by calling the Google Maps API. Experiments show that our work improves the performance of the Alberta breast cancer screening program. Several extensions to our method are worth further investigation. First, in our method, the Interchange algorithm is implemented following the idea proposed by Densham and Rushton. While this can dramatically speed up the solving process, the accuracy is not improved. Recently, meta-heuristic algorithms, such as VNS (Variable Neighborhood Search) and Tabu search, have been developed to improve optimization accuracy. Therefore, it would be interesting to incorporate strategies from meta-heuristic algorithms in order to increase accuracy. Second, there is a need to analyze breast cancer screening records in order to understand disease patterns. These patterns would help us set the factors in the method precisely, such as the travelling distance threshold d_0. Finally, Lapierre et al. suggest that the use of satellite or mobile facilities might constitute an effective approach for improving participation in preventive health care programs. Indeed, the ABCSP has a program of mobile mammography sites that extends the reach of mammography services to Alberta women living in rural communities. Thus, extending the current location model to a hierarchical location model that considers both fixed and mobile facilities is meaningful.
The authors declare that they have no competing interests.
WG participated in the conceptualization of the study, designed the methodology, gathered the data and implemented the experiments. XW participated in the conceptualization of the study, designed the methodology, gathered the data and supervised the experiments. SEM participated in the conceptualization of the study and gathered the data. All authors read and approved of the final manuscript.
Quantitative proteomic analysis unveils a critical role of VARS1 in hepatocellular carcinoma aggressiveness through the modulation of MAGI1 expression | a3718ca2-e47e-4aa1-b755-ad3952f5616a | 11731432 | Biochemistry[mh] | Hepatocellular carcinoma (HCC) represents the most common type of liver cancer. HCC development is mostly linked to the presence of an underlying chronic liver disease, such as infection by hepatitis B (HBV) or C (HCV) viruses, excessive alcohol consumption, or metabolic dysfunction-associated steatotic liver disease (MASLD). Although the prevalence of HBV/HCV-related HCC is the highest, cases of MASLD-related HCC are growing faster than those of other aetiologies and are envisioned to become the leading cause of HCC in the coming years. Since the approval of sorafenib in 2007 for advanced HCC [median overall survival of 10.7 months (sorafenib) vs. 7.9 (placebo)], the landscape of medical alternatives has evolved with new surgical approaches, novel first- and second-line systemic treatments, and, more recently, the approval of the combination of the immune checkpoint inhibitor atezolizumab with the anti-angiogenic monoclonal antibody bevacizumab for unresectable HCC. This combination reached an overall survival at 12 months of 67.2% vs. 54.6% with sorafenib. However, the identification of more precise diagnostic and prognostic biomarkers and the development of additional therapeutic alternatives are still clinical needs in HCC. Unfortunately, the molecular pathogenesis of HCC varies according to the aetiology, which hampers the identification of more general biomarkers and therapeutic targets. For these reasons, great efforts are being made to identify, characterise, and validate molecular HCC subclasses with prognostic and therapeutic potential, leveraging recent advances in -omic technologies. Integration of clinical, genomic, and transcriptomic data led to a first comprehensive molecular classification of HCC in which the proliferative and non-proliferative subclasses were defined. Since then, many studies have complemented this classification to increase its medical relevance [i.e., immune subclasses to predict susceptibility to immune-checkpoint inhibitors, molecular-to-histological subclass correspondence for implementation in diagnosis/prognosis, or transcriptomic expression panels predicting tumour prognosis]. However, proteomic profiles have not been successfully incorporated into this molecular characterization, despite increasing knowledge indicating that the proteome is not a static reflection of the transcriptome. Different reasons, including technical limitations or patient selection bias, have hampered a wider recognition of proteomic studies in HCC. Indeed, previous proteomic studies have been characterised by incomplete coverage of the proteome, especially in targeted proteomics approaches where only a pre-defined subset of proteins is quantified, or by aetiology biases towards viral hepatitis or MASLD-related HCC. In a recent study by Ng, C. et al., many of these issues were addressed through a multi-omic analysis of HCC derived from different aetiologies, in which some cellular pathways (i.e., RNA processing) were consistently dysregulated at all levels while others (i.e., the translational machinery) had a distinct regulation at the RNA and protein levels. Thus, further knowledge about the HCC proteome is essential for finding potentially relevant clinical tools for the management of this challenging pathology.
This study aimed to gain novel insights into HCC proteome composition and dynamics through non-targeted quantitative proteomics of subcellular-fractionated hepatic samples from a well-characterised, aetiology-balanced cohort of patients. With this approach, a deep characterization of general and aetiology-specific molecular alterations in HCC was obtained and two proteomic tumoral subgroups with prognostic value were defined. We also identified and propose herein the overexpression of a cellular machinery, the aminoacyl-tRNA synthetases (ARSs), as a prognostic marker and a potentially targetable vulnerability in HCC.

Quantitative proteomic profiling of cytosolic and nuclear fractions in HCC

Cytosolic and nuclear fractions of tumour tissues (T) and non-tumour adjacent tissues (NTAT) from 42 HCC patients and 5 healthy hepatic tissues (Fig. A; Supp. Table) were subjected to quantitative proteomic analysis (SWATH-MS). Raw data resulting from this analysis are available as Supplemental Data 2A and 2B. This served to identify 1532 proteins in the cytosolic fractions and 2102 proteins in the nuclear fractions, 313 of which were found in both fractions (Fig. B). Subcellular fractionation was confirmed by Western blot with fraction-specific markers (NTAT vs. tumour; cytosol vs. nucleus, Supp. Figure A). The proteome of NTATs was comparable to the proteome of healthy hepatic tissues when analysed by non-supervised hierarchical clustering, as no differential clustering was observed (Supp. Figure B), thus indicating that NTATs represent reliable, healthy-like controls for the paired NTAT vs. tumour comparison. In contrast, the proteome of HCC samples was considerably altered compared with NTATs, in that 524 proteins (34.2%) exhibited a significantly different (p < 0.05) abundance in the cytosolic fractions and 1013 proteins (48.2%) exhibited a significantly different abundance in the nuclear fractions (Fig. B-C). Principal Component Analysis (PCA) further confirmed a profound dysregulation of the HCC proteome, with a clear separation between HCC and NTAT samples based on nuclear proteomes and higher variation in cytosolic proteomes (Supp. Figure C). The cytosolic HCC proteome (Fig. D) was characterised by the alteration of metabolic pathways (carbohydrate, lipid and amino acid metabolism), together with cell movement (alteration of integrin signalling) and mRNA/protein metabolism [nonsense-mediated decay (NMD), protein folding]. Further analyses using IPA software, which predicts the activation/repression of cellular pathways, confirmed the activation of LXR/RXR signalling and a general inactivation of xenobiotic metabolism (Supp. Figure A). The nuclear HCC proteome (Fig. E) was characterised by the alteration of gene expression processes (RNA polymerase, transcription termination), RNA metabolism (mRNA splicing, tRNA aminoacylation), proteostasis-related cellular functions [Unfolded Protein Response (UPR), IRE1a, and Xbp1 signalling], and mitochondrial damage-related pathways. In this case, IPA analyses showed the activation of the spliceosome and sirtuin signalling, as well as the inactivation of oxidative phosphorylation and degradative processes (Supp. Figure B).
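The kind of paired tumour-versus-NTAT comparison and PCA described above can be sketched as follows, assuming log2 abundance matrices with proteins as rows and paired patients as columns; this is an illustrative analysis, not the authors' exact pipeline.

```python
from scipy import stats
from sklearn.decomposition import PCA

def paired_differential_abundance(tumour, ntat, alpha=0.05):
    """Per-protein paired t-test on log2 abundances (rows = proteins, columns = paired patients)."""
    hits = []
    for p in range(tumour.shape[0]):
        _, p_value = stats.ttest_rel(tumour[p], ntat[p])
        if p_value < alpha:
            hits.append((p, tumour[p].mean() - ntat[p].mean(), p_value))
    return hits  # (protein index, mean log2 difference, p-value)

def pca_scores(samples_by_proteins, n_components=2):
    """PCA on samples (rows) to visualise the separation between HCC and NTAT proteomes."""
    return PCA(n_components=n_components).fit_transform(samples_by_proteins)
```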
According to the aetiology-dependent molecular pathogenesis of HCC, several proteomic alterations in the cytosolic and nuclear proteomes were aetiology-specific, with HBV, HCV, and mixed HCC samples exhibiting a similar profile, and MASLD-derived and alcohol-related (ethylic) HCC samples sharing many proteomic alterations such as PI3K/AKT signalling, the spliceosome cycle, DNA double-strand break, and BER pathways. In addition, MASLD-derived HCC samples showed specific proteomic features including cell cycle regulation, telomere maintenance, and protein translation-related pathways (Supp. Figure A-B), suggesting that the protein homeostasis alterations found in the general HCC cohort could be overrepresented in MASLD-derived HCC patients.

The cytosolic and nuclear proteomic fingerprints define prognostic HCC subtypes

To interrogate the existence of HCC subclasses based on cytosolic or nuclear proteomic fingerprints, unsupervised hierarchical clustering of HCC and NTAT samples was performed. In both cases, the cytosolic and nuclear proteomes defined two HCC subgroups (PR1 and PR2), clearly differentiated from NTAT samples [Fig. A (Cytosol) and 2B (Nucleus)]. Most patients were clustered in the same subgroup by both the cytosolic and nuclear fingerprints (Supplemental Table). PR1 and PR2 were molecularly characterised by specific cytosolic and nuclear protein clusters. In particular, PR1 samples had a cytosolic proteome enriched in proteins of the Cyto3 cluster and a nuclear proteome enriched in proteins of the Nuc2 cluster, which were, in both cases, related to the acute phase response and LXR and FXR signalling, consistent with a subgroup partially maintaining hepatic characteristics. On the other hand, PR2 samples were defined by an overexpression of the Cyto4 and Nuc4 clusters, which were enriched for cancer-associated pathways such as EIF2, mTOR, spliceosome, or DNA repair, together with other less explored pathways such as tRNA aminoacylation. Patients from the PR1 and PR2 subgroups also had different clinical characteristics [Supp. Figure A (Cytosol) and 4B (Nucleus)]. Patients from the PR2 subgroup showed a worse prognosis (i.e., bigger tumours, lower survival, higher recurrence and dedifferentiation). Although there was no statistically different distribution of aetiologies between PR1 and PR2, all the MASLD-derived HCC patients were included in PR2. Thus, the proteomic PR1 subgroup could be associated with a low-aggressiveness HCC with maintained hepatic function, while PR2 represents a highly aggressive/poor-prognosis subtype of HCC. These proteome-defined subclasses correlated with the previously defined HCC molecular subclasses, as demonstrated by Gene Set Enrichment Analysis (GSEA) comparing the cytosolic and nuclear proteomes of both subgroups (Supp. Figure C). In particular, the PR2 cytosolic proteome presented an enrichment in the gene sets defining the Boyault G12 and G3 clusters, while a negative NES score was obtained for genes downregulated in the CTNNB1 subgroup proposed by Chiang. Similarly, the PR2 nuclear proteome presented an enrichment in the gene sets defining the Boyault G3 and Hoshida S2 subclasses, while the PR1 nuclear proteome was enriched in genes from the Unannotated subclass. Thus, PR1 shares common characteristics with clusters from the non-proliferative HCC subclass, while the PR2 molecular fingerprint coincides with the proliferative subclass.
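A hedged sketch of the unsupervised hierarchical clustering used to split tumours into two proteomic subgroups is shown below; Ward linkage, z-scoring and k = 2 are illustrative choices and not necessarily the settings used in the study.

```python
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

def proteomic_subgroups(abundance, k=2):
    """Cluster tumour samples (rows) on protein abundances (columns) into k subgroups."""
    z = zscore(abundance, axis=0)                     # standardise each protein across samples
    tree = linkage(z, method="ward")                  # agglomerative clustering of samples
    return fcluster(tree, t=k, criterion="maxclust")  # labels, e.g. 1 = PR1-like, 2 = PR2-like
```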
Aminoacyl-tRNA synthetases (ARSs) are profoundly dysregulated in HCC

Transfer RNA (tRNA) aminoacylation, a crucial biological process catalysed by the aminoacyl-tRNA synthetases (ARSs) and poorly explored in HCC, was one of the most consistently altered features in the HCC proteome compared with NTAT, and also in the proteomic proliferative-like subclass (PR2) compared with PR1. Among the 17 ARSs detected in the proteome, DARS1, EPRS1, NARS1, and VARS1 were upregulated in cytosolic fractions from HCC samples. In nuclear fractions, DARS1, FARSA, FARSB, NARS1, VARS1, and WARS1 were upregulated and EPRS1 and RARS1 were downregulated (Fig. A). Cytosolic VARS1 and NARS1 and nuclear FARSB had the highest discriminatory capacity between HCC and NTAT samples (Fig. A). ARSs dysregulation in HCC was further confirmed in other available in silico HCC cohorts (Fig. B). VARS1 and EPRS1 were consistently overexpressed in tumour samples of 5 out of 6 mRNA cohorts and in the protein cohort (CPTAC-Zhou), while NARS1, DARS1, and FARSA were overexpressed in more than 3 mRNA cohorts and in the CPTAC-Zhou cohort. Furthermore, VARS1 was among the top 3 genes with the highest discriminatory capacity (HCC vs. controls) in 4 of the cohorts (Fig. C). The general dysregulation of the ARSs family had prognostic potential, as shown by the evaluation of the clinical characteristics of different cohorts [cytosolic and nuclear proteomics cohorts, TCGA-LIHC (mRNA), and CPTAC-Zhou (protein)] when patients were clustered into three groups according to the expression of the ARSs family (High, Medium, and Low ARSs expression) (Cytosolic proteomics – Fig. D; Nuclear proteomics – Supp. Figure A; CPTAC-Zhou – Supp. Figure B; TCGA-LIHC – Supp. Figure C). Patients with high cytosolic ARSs abundance had significantly lower survival and a higher recurrence rate (Fig. E). The High-ARSs molecular fingerprint was enriched for the Boyault G123 gene set, while Low-ARSs patients were enriched for the Hoshida S3 and the Boyault G6 subclusters (Fig. F). Similar results were obtained for the nuclear proteomic analysis (Supp. Figure A), for the CPTAC-Zhou protein cohort (Supp. Figure B), and for the TCGA-LIHC cohort (Supp. Figure C). The TCGA-LIHC cohort also revealed an association between the High-ARSs subgroup and a higher mutational frequency of genes typically mutated in the proliferative HCC subclass (i.e., TP53, ARID1/A, TSC1/2) (Supp. Figure C). A deeper molecular analysis (Supp. Figure A) associated the High-ARSs subgroup with the alteration of proteostasis [Unfolded Protein Response (UPR)], mRNA metabolism, and motility pathways. Consistently, an increased expression of typical mesenchymal markers (i.e., YWHAZ, YWHAE, ACTN4, etc.) and downregulation of hepatocyte markers (i.e., ALB, APOB, SERPINA1, etc.) (Supp. Figure B) were found in the High-ARSs subgroup. These results confirmed the upregulation of the ARSs family in aggressive/proliferative and undifferentiated HCC samples. Mutations and genomic alterations in the ARSs genes were found at low frequency in HCC in the TCGA-LIHC cohort, ranging from 0.2 to 4% (Supp. Figure A). However, patients with at least one mutation/genomic alteration in ARSs genes ("ARS altered group") had lower disease-free survival than the rest of the patients ("ARS unaltered group") (Supp. Figure B). The ARS altered group was also characterised by a lower prevalence of fibrosis but a higher occurrence of established cirrhosis (Supp. Figure C) and higher histologic grades (Supp. Figure D).
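Stratifying patients by overall ARSs expression and comparing outcomes, as described above, could look like the sketch below; the mean-expression score, the tertile cut-offs and the lifelines log-rank test are assumptions for illustration rather than the authors' code.

```python
import pandas as pd
from lifelines.statistics import multivariate_logrank_test

def ars_groups_and_survival(ars_expression, survival):
    """ars_expression: DataFrame (patients x ARS genes); survival: DataFrame with 'time', 'event'."""
    score = ars_expression.mean(axis=1)                            # summary ARSs expression score
    groups = pd.qcut(score, 3, labels=["Low", "Medium", "High"])   # tertile grouping
    test = multivariate_logrank_test(survival["time"], groups, survival["event"])
    return groups, test.p_value
```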
Thus, ARSs mutations and genomic alterations could also have potential prognostic usefulness in HCC.

Valine-tRNA synthetase (VARS1) is dysregulated and correlates with bad prognosis in HCC

Based on these results, valine-tRNA synthetase (VARS1) exhibited the most prominent alteration in HCC. Indeed, VARS1 protein levels were upregulated in cytosolic and nuclear fractions of HCC samples (compared to NTATs), and this upregulation was corroborated in the CPTAC protein cohort and 5 additional mRNA cohorts (Fig. A). A subset of samples from the Retrospective-2 cohort (protein validation cohort) was used to confirm increased VARS1 protein abundance in HCC by Western blot (Fig. B). Of note, VARS1 expression levels were also higher in HCC-derived adrenal and lung metastases compared to the primary tumour (Fig. A). In addition, higher VARS1 cytosolic protein abundance was significantly associated with larger tumours, microinvasion, and low differentiation grade, and higher VARS1 nuclear abundance with portal hypertension. Finally, lower survival and higher recurrence were found in patients with high VARS1 in both the cytosolic proteomic cohort and the in silico TCGA-LIHC cohort (Fig. C-D).

VARS1 modulation in vitro alters stemness-related parameters in HCC cell lines

Functional consequences of VARS1 were assessed by overexpression (plasmid) and silencing (2 siRNAs) in two HCC-derived cell lines with different degrees of aggressiveness (validation studies in Supp. Figure A-B). VARS1 overexpression had no significant effect on Hep3B and SNU-387 proliferation or migration, although VARS1 silencing slightly but significantly reduced proliferation and migration in both cell lines (Supp. Figure C). In contrast, VARS1 overexpression significantly increased tumorsphere size and colony formation, while the opposite effect was observed upon VARS1 silencing (Fig. E), suggesting a more relevant role of VARS1 in epithelial-to-mesenchymal transition (EMT) and tumour establishment. Indeed, VARS1 silencing was associated with a general upregulation of epithelial-related adherence molecules (CDH2 and ZO-1) and downregulation of the RAC-RHO axis, while stemness/progenitor markers were mostly unaltered (Supp. Figure D). Consistently, VARS1 overexpression modulated the expression of some EMT and stemness/progenitor markers, including the Ras Homolog Family Members A (RHOA) and B (RHOB), the MET Proto-Oncogene, and the Thy-1 Cell Surface Antigen (THY1) (Supp. Figure D). The implication of VARS1 in tumour establishment was confirmed by two in vivo approaches. First, an in vivo Extreme Limiting Dilution Assay (ELDA) was performed, demonstrating that VARS1-overexpressing cells formed bigger subcutaneous tumours compared to control cells at the three cell concentrations tested (1 million, 100,000, and 10,000 cells/flank) (Fig. A-B, Supp. Figure E). The differences in tumour volume/weight and tumour number were larger when subcutaneously injecting a lower concentration of cells, a condition in which control cells formed small or no tumours. Second, VARS1-overexpressing, luciferase-expressing Hep3B cells were used to establish an orthotopic HCC model, which confirmed that, under challenging conditions (injection of a sub-optimal concentration of cells), VARS1-overexpressing cells were more successful at initiating tumour formation and promoted faster tumour growth (Fig. C-D).
VARS1 has a valine-independent effect on the HCC proteome and modulates MAGI1 expression

To further elucidate the molecular consequences of VARS1 overexpression, non-targeted quantitative proteomics of Hep3B and SNU-387 cell lines overexpressing VARS1 was performed. Raw data are available as Supplemental Data 2C. VARS1 overexpression altered the expression of 257 proteins in Hep3B and 348 proteins in SNU-387 (Fig. A). In both cell lines, these changes were associated with common cellular processes such as mRNA metabolism, SUMOylation, and the RHO GTPase cycle. Interestingly, there were also cell line-specific alterations, including a decrease in proteins controlling apoptosis in Hep3B and a shift in inflammasome predominance in SNU-387 (Fig. B). Remarkably, VARS1 overexpression induced changes in 20 proteins common to the two cell lines. Most of these altered proteins (16/20) had a median-centred composition of valine (Supp. Figure A), suggesting no bias toward valine-rich proteins. Also, a general downregulation of valine-tRNAs (the substrates of VARS1) was observed in HCC tumours compared to control liver tissues from the TCGA-LIHC cohort, wherein only 5 of the 30 valine-tRNAs had a significant correlation, all of them negative, with the expression of VARS1 in tumour tissues (Supp. Figure B). These data demonstrate that VARS1 overexpression is not accompanied by an increased expression of the tRNAs that incorporate valine into nascent proteins, pointing to a valine-independent effect of VARS1 in HCC. Among these 20 commonly altered proteins, MAGI1, EBNA1BP2, IDE, and MFSD10 were consistently altered (increased or reduced expression) in both cell lines (Fig. C). Of those, MAGI1, which has been proposed as a tumour suppressor in HCC, presented expression levels inversely correlated with VARS1 expression in three different HCC cohorts (Fig. D, Supp. Figure C). In vitro, VARS1 silencing induced MAGI1 mRNA and protein upregulation, while VARS1 overexpression induced MAGI1 mRNA and protein downregulation in both cell lines (Fig. E-F). This modulation of MAGI1 by VARS1 may be exerted at the transcriptional and post-transcriptional levels. Indeed, the in silico hTFtarget database revealed 30 transcription factors that bind to the MAGI1 promoter in the liver (Supp. Figure A), two of them (FOXA1 and JUN) being VARS1 interactors according to the IntAct database from EBI (Supp. Figure B). Similarly, publicly available eCLIPseq data from the ENCORI project served to identify 135 MAGI1 mRNA-binding proteins (RBPs). Subsequently, we interrogated our quantitative proteomics data on cell lines to identify RBPs significantly altered after VARS1 overexpression (Supp. Figure C). Among the dysregulated MAGI1 mRNA-binding proteins, we identified key splicing-related proteins such as PRPF8, DHX9, RBMX, and HNRNPA2B1, as well as other proteins involved in mRNA metabolism, such as UPF1 and EXOSC5. Although further experiments should be performed to confirm these VARS1-mediated MAGI1 regulatory mechanisms, these results provide a possible explanation for the transcriptional and post-transcriptional modulation of MAGI1 by VARS1.

MAGI1 acts as a tumour suppressor in HCC and mediates VARS1-induced increased aggressiveness

MAGI1 was first characterised in HCC cohorts, where a significant downregulation in HCC samples compared to controls was confirmed (Fig. A).
MAGI1 overexpression in cell lines decreased cellular proliferation, especially in the SNU-387 cell line, while both cell models showed a reduction of at least 50% in colony and tumorsphere formation (Fig. B; validation studies in Supp. Figure), confirming that MAGI1 is also involved in stemness and tumour establishment, as seen for VARS1. To assess whether MAGI1 downregulation is required for the VARS1-driven promotion of aggressiveness, rescue experiments simultaneously overexpressing VARS1 and MAGI1 were performed (validation studies in Fig. C). This approach showed that the promotion of colony and tumorsphere formation induced by VARS1 overexpression was abolished by MAGI1 overexpression in both cell lines (Fig. D), demonstrating a MAGI1-mediated effect of VARS1 overexpression in HCC.
Based on these results, valine-tRNA synthetase (VARS1) exhibited the most prominent alteration in HCC. Indeed, VARS1 protein levels were upregulated in cytosolic and nuclear fractions of HCC samples (compared to NTATs), and this upregulation was corroborated in the CPTAC protein cohort and 5 additional mRNA cohorts (Fig. A). A subset of samples from the Retrospective-2 cohort (Protein validation cohort) was used to confirm increased VARS1 protein abundance in HCC by Western Blot (Fig. B). Of note, VARS1 expression levels were also higher in HCC-derived adrenal and lung metastases compared to the primary tumour (Fig. A). In addition, higher VARS1 cytosolic protein abundance was significantly associated with larger tumours, microinvasion, and low differentiation grade; and higher VARS1 nuclear abundance with portal hypertension. Finally, lower survival and higher recurrence were found in patients with high VARS1 in both the cytosolic proteomic cohort and the in silico TCGA-LIHC cohort (Fig. C-D). Functional consequences of VARS1 were assessed by overexpression (plasmid) and silencing (2 siRNAs) in two HCC-derived cell lines with different aggressiveness (validation studies in Supp. Figure A-B). VARS1 overexpression had no significant effect on Hep3B and SNU-387 proliferation or migration, whereas VARS1 silencing slightly but significantly reduced proliferation and migration in both cell lines (Supp. Figure C). In contrast, VARS1 overexpression significantly increased tumorsphere size and colony formation, while the opposite effect was observed upon VARS1 silencing (Fig. E), suggesting a more relevant role of VARS1 in epithelial-to-mesenchymal transition (EMT) and tumour establishment. Indeed, VARS1 silencing was associated with a general upregulation of epithelial-related adhesion molecules (CDH2 and ZO-1) and downregulation of the RAC-RHO axis, while stemness/progenitor markers were mostly unaltered (Supp. Figure D). Consistently, VARS1 overexpression modulated the expression of some EMT and stemness/progenitor markers, including the Ras Homolog Family Member A (RHOA) and B (RHOB), the MET Proto-Oncogene, and the Thy-1 Cell Surface Antigen (THY1) (Supp. Figure D). The implication of VARS1 in tumour establishment was confirmed by two in vivo approaches. First, an in vivo Extreme Limiting Dilution Assay (ELDA) was performed, demonstrating that VARS1-overexpressing cells formed larger subcutaneous tumours than control cells at the three cell concentrations tested (1 million, 100,000, and 10,000 cells/flank) (Fig. A-B, Supp. Figure E). The differences in tumour volume/weight and tumour number were greater when a lower concentration of cells was injected subcutaneously, a condition in which control cells formed small or no tumours. Second, VARS1-overexpressing, luciferase-expressing Hep3B cells were used to establish an orthotopic HCC model, which confirmed that, under challenging conditions (injection of a sub-optimal concentration of cells), VARS1-overexpressing cells were more successful at initiating tumour formation and promoted faster tumour growth (Fig. C-D). To further elucidate the molecular consequences of VARS1 overexpression, non-targeted quantitative proteomics was performed on Hep3B and SNU-387 cell lines overexpressing VARS1. Raw data are available as Supplemental Data 2C. VARS1 overexpression altered the expression of 257 proteins in Hep3B and 348 proteins in SNU-387 (Fig. A).
In both cell lines, these changes were associated with common cellular processes such as mRNA metabolism, SUMOylation, and the RHO GTPase cycle. Interestingly, there were also other cell line-specific alterations, including a decrease in proteins controlling apoptosis in Hep3B and a shift in inflammasome predominance in SNU-387 (Fig. B). Remarkably, VARS1 overexpression induced changes in 20 proteins common to the two cell lines. Most of these altered proteins (16/20) had a median-centred composition of valine (Supp. Figure A), suggesting no bias toward valine-rich proteins. Also, a general downregulation of valine-tRNAs (the substrates of VARS1) was observed in HCC tumours compared to control liver tissues from the TCGA-LIHC cohort, and only 5 of the 30 valine-tRNAs showed a significant correlation, all of them negative, with the expression of VARS1 in tumour tissues (Supp. Figure B). These data demonstrate that VARS1 overexpression is not accompanied by an increased expression of the tRNAs that incorporate valine into nascent proteins, pointing to a valine-independent effect of VARS1 in HCC. Among these 20 commonly altered proteins, MAGI1, EBNA1BP2, IDE, and MFSD10 were consistently altered (increased or reduced expression) in both cell lines (Fig. C). Of these, MAGI1, which has been proposed as a tumour suppressor in HCC, showed expression levels inversely correlated with VARS1 expression in three different HCC cohorts (Fig. D, Supp. Figure C). In vitro, VARS1 silencing induced MAGI1 mRNA and protein upregulation, while VARS1 overexpression induced MAGI1 mRNA and protein downregulation in both cell lines (Fig. E-F). This modulation of MAGI1 by VARS1 may be exerted at the transcriptional and post-transcriptional levels. Indeed, the in silico hTFtarget database revealed 30 transcription factors that bind to the MAGI1 promoter in the liver (Supp. Figure A), two of them (FOXA1 and JUN) being VARS1 interactors according to the IntAct database from EBI (Supp. Figure B). Similarly, publicly available eCLIP-seq data from the ENCORI project served to identify 135 MAGI1 mRNA-binding proteins (RBPs). Subsequently, we interrogated our quantitative proteomics data on cell lines to identify RBPs significantly altered after VARS1 overexpression (Supp. Figure C). Among the dysregulated MAGI1 mRNA-binding proteins, we identified key splicing-related proteins such as PRPF8, DHX9, RBMX, and HNRNPA2B1, as well as other proteins involved in mRNA metabolism, such as UPF1 and EXOSC5. Although further experiments should be performed to confirm these VARS1-mediated MAGI1 regulatory mechanisms, these results provide a possible explanation for the transcriptional and post-transcriptional modulation of MAGI1 by VARS1. MAGI1 was first characterized in the HCC cohorts, where a significant downregulation in HCC samples compared to controls was confirmed (Fig. A). MAGI1 overexpression in cell lines decreased cellular proliferation, especially in the SNU-387 cell line, while both cell models showed at least a 50% reduction in the formation of colonies and tumorspheres (Fig. B; validation studies in Supp. Figure), confirming that MAGI1 is also involved in stemness and tumour establishment, as seen for VARS1. To assess whether MAGI1 downregulation is required for VARS1 promotion of aggressiveness, rescue experiments simultaneously overexpressing VARS1 and MAGI1 were performed (validation studies in Fig. C).
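The cohort-level relationship described above can be checked with a very small sketch; it assumes a hypothetical samples-by-genes expression table (the column names "VARS1" and "MAGI1" are assumptions for illustration), and a negative rho with a small p-value would correspond to the inverse VARS1-MAGI1 association reported here.

```python
# Minimal sketch: rank correlation between VARS1 and MAGI1 in one cohort.
import pandas as pd
from scipy.stats import spearmanr

def vars1_magi1_correlation(expr: pd.DataFrame) -> tuple[float, float]:
    rho, p_value = spearmanr(expr["VARS1"], expr["MAGI1"])
    return rho, p_value  # rho < 0 indicates an inverse relationship
```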
This approach showed that the promotion of colony and tumorsphere formation induced by VARS1 overexpression was abolished by MAGI1 overexpression in both cell lines (Fig. D), demonstrating a MAGI1-mediated effect of VARS1 overexpression in HCC. High-throughput methods enable the collection of sufficient molecular data to create a detailed view of the cellular processes affected by a pathological condition. Previous proteomic studies on HCC have deciphered some of these cellular processes. However, limitations associated with these studies [including viral hepatitis-biased cohorts or the use of targeted proteomics that only reflects changes in a pre-defined subset of proteins] have hampered a wider knowledge of the proteomic changes occurring in HCC. This is the first quantitative untargeted proteomic study comprising a representative subset of the main HCC aetiologies and independently characterizing two subcellular locations (cytosol and nucleus), thus (1) increasing the coverage and the number of identified proteins; and (2) allowing for estimations of protein regulation through subcellular translocation. Our results highlight a wide proteome reprogramming in HCC, especially of the nuclear proteome (46.9% of the proteins altered in HCC). Several of our results are in line with previously described alterations in HCC, such as the activation of LXR and FXR receptors or the alteration of processes related to cellular proteostasis, including the UPR, thus supporting the proposed potential of UPR targeting in liver cancer. We also found an aetiology-specific pattern of proteome dysregulation, with patients with HCV- and HBV-derived HCC showing similar changes. MASLD and ASH HCC samples also shared common patterns of proteomic alterations that could reflect the common course of the underlying disease. In the case of MASLD-derived HCC, aetiology-specific altered processes included the mTOR pathway, which might indicate a particular susceptibility of MASLD-derived tumours to protein metabolism-targeting compounds such as everolimus derivatives. Irrespective of the underlying aetiology, previous studies have reported molecular subclasses of HCC based on genomic and transcriptomic profiles, and these have established two robust subclasses of HCC, the proliferative and the non-proliferative. The results presented herein demonstrate that both the cytosolic and nuclear proteomic fingerprints of HCC define two tumoral subgroups that are in accordance with the previously reported transcriptomic and genomic ones. Specifically, the PR1 subclass was identified as a good-prognosis, non-proliferative-related subgroup with a partial maintenance of hepatic function and differentiation, while PR2 was identified as a poor-prognosis group with a molecular upregulation of cancer-related pathways including mTOR, PI3K/AKT, or telomere maintenance. These proteomic molecular alterations were sufficient to significantly assign the PR2 group to the previously reported subclusters S2 and G3 from the proliferative subclass and to confirm an exclusion of the CTNNB1 subcluster in PR1, as reported by Sia et al. for the immune subclass. We therefore propose the proteomic fingerprint as a new layer of characterization of HCC molecular subgroups for the study of subtype-specific, clinically relevant, and potentially targetable alterations.
The highly aggressive PR2 subgroup was characterised by the dysregulation of a still underexplored cellular machinery in HCC, the tRNA-aminoacylation process, which is catalysed by the aminoacyl-tRNA synthetases (ARSs). The ARSs catalyse the transfer of amino acids to tRNA prior to tRNA translocation to the ribosome. However, ARSs are known to have non-canonical functions such as (but not limited to) amino acid level sensing and protein sequestration through protein-protein interactions. Our results demonstrate a broad dysregulation of the ARSs family in HCC samples that is not restricted to the cytoplasmic compartment but extends to the nuclear compartment, suggesting that their non-canonical functions play a relevant role in HCC. The upregulation of cytosolic EPRS1 is consistent with previous reports showing a correlation of high EPRS1 levels with poor prognosis in HCC. GARS1 and KARS1 levels, although reported to be higher in HCC samples, were unchanged in our proteomic cohort. DARS2, the only mitochondrial ARS detected through proteomics, was also overexpressed in tumour samples, as previously reported. The complete characterization of the ARS machinery also confirmed a consistent upregulation of other previously unstudied members, including VARS1, DARS1, FARSA, and FARSB, in HCC. Our results also reveal the potential of the ARSs mutational status and/or the ARSs expression profile as prognostic tools capable of predicting the clinical progression of patients. High tumour ARSs levels robustly define a subgroup of patients with poor prognosis (higher recurrence/lower survival), which was validated in three different HCC cohorts. Molecular differences between the High-ARSs and the Low-ARSs subgroups clustered the High-ARSs subgroup into the proliferative subclass. High-ARSs patients also showed a higher mutational rate of TP53 and no significant differences in CTNNB1 mutations, supporting the exclusion of the HCC CTNNB1 subcluster. Therefore, the targeted measurement of ARSs aberrations (genomics/transcriptomics/proteomics) in HCC patients could represent a valuable strategy for estimating patients' prognosis. We focused our attention on the most consistently upregulated and aggressiveness-associated ARS member, VARS1, as a proof of concept of the potential of ARSs targeting in HCC. The combined results of in vitro and in vivo experiments, together with the screening of mesenchymal and cancer stem cell (CSC)-related markers after VARS1 expression modulation, consistently showed that VARS1 might mostly affect tumour initiation and establishment. A similar role has been described for FARSA in colorectal cancer, which further supports a role of ARSs and, specifically, VARS1 in the maintenance of cells with a CSC-like phenotype. CSCs are proposed to be the main players in therapy resistance and recurrence; therefore, ARSs targeting could be a potential strategy for overcoming them. VARS1 loss-of-function germline mutations are associated with hereditary microcephaly and encephalopathy, consistent with the mitochondrial defect observed in leukaemia following valine restriction. We did not observe, however, a particular enrichment in mitochondrial dysfunction markers following VARS1 overexpression in cell lines. Valine deprivation in HCC cells also decreases colony formation, as observed herein for VARS1 silencing, suggesting that both approaches could share common mechanisms.
However, while a relevant regulation of the key amino acid sensor, mTOR, by threonine levels has been described, no previous interactions between valine and mTOR are known. In fact, valine supplementation is unable to activate mTOR in threonine-deprived cells, which points to an mTOR-independent role of valine and VARS1 in physiology and pathology. Further studies should also assess the implication of VARS1 in the regulation of immune infiltration, as it prevents CD8 T-cell infiltration in melanoma; moreover, in the mesenchymal HCC cell line SNU-387 we observed a clear switch in the inflammasome spectrum, with VARS1 overexpression increasing the expression of NLRP3-related proteins while decreasing the NLRP1-related ones, suggesting its potential as an immunomodulator. We identified MAGI1, a known tumour suppressor in HCC acting as a scaffold protein at cellular adherens junctions through PTEN recruitment, as a downstream mediator of VARS1 function in HCC. Indeed, MAGI1 was consistently downregulated in both HCC cell lines after VARS1 overexpression and negatively correlated with VARS1 expression in HCC patients. MAGI1 prevents invasiveness in an E-cadherin-dependent manner in kidney cells; thus, MAGI1 downregulation after VARS1 overexpression could favour a mesenchymal phenotype. In fact, the dysregulation of some known MAGI1-modulated pathways, such as Wnt/β-catenin, RHO GTPases, and mTORC1, was identified in VARS1-overexpressing cells, supporting a MAGI1-mediated effect of VARS1. Importantly, MAGI1 has been shown to regulate the UPR in endothelial cells under inflammatory stimuli, which led us to hypothesize that MAGI1 could be part of the response to the proteostasis alterations caused by VARS1 modulation. Our results indicate that VARS1 could modulate MAGI1 levels through transcriptional, rather than translational, regulation. First, both MAGI1 mRNA and protein levels were altered upon VARS1 modulation. Second, MAGI1 and VARS1 levels were correlated in HCC cohorts at both the mRNA and protein levels. Third, other studies have linked the deletion of a specific ARS to decreased translation of proteins rich in the corresponding amino acid, but our codon enrichment analysis did not reveal a significant codon enrichment for valine either in HCC patients with high VARS1 levels or in HCC cell lines after VARS1 modulation. Finally, it could be hypothesized that valine-tRNAs should be increased in HCC if translation were the main underlying pathway, as observed in leukaemia; however, our analysis revealed that valine-tRNAs are even downregulated in HCC compared to control samples. Taken together, we hypothesize that VARS1 might control MAGI1 levels through interactions with proteins controlling MAGI1 transcription. Further supporting this hypothesis, in silico analyses identified two MAGI1 promoter-binding transcription factors (FOXA1 and JUN) that are known VARS1 interactors. Similarly, we also identified several MAGI1 mRNA-binding proteins altered in our quantitative proteomics data on VARS1-overexpressing cell lines, including key splicing-related proteins such as PRPF8, which our group has reported to be upregulated in HCC, as well as DHX9, RBMX, and HNRNPA2B1. Finally, other proteins involved in mRNA metabolism, such as UPF1, the main controller of mRNA degradation by mRNA decay, and EXOSC5, one of the core components of the exosome machinery, also bind to MAGI1 mRNA and could control VARS1-mediated MAGI1 mRNA degradation.
Therefore, we present herein the first quantitative proteomic characterization of the cytosolic and nuclear proteomes of HCC patients, which has served to determine the existence of two proteomic subgroups associated with the previous transcriptomic classification. We have identified the dysregulation of a cellular machinery, the aminoacyl-tRNA synthetases (ARSs), as a key event in patients from the most aggressive subgroup, and we propose a panel of ARSs upregulation as a potential clinical tool for assessing patients' prognosis. Finally, the VARS1-MAGI1 axis has been outlined as a therapeutic vulnerability for aggressive HCC tumours, favouring tumour establishment and CSC-like phenotypes, and is therefore of great relevance for recurrence management. Patients, samples, and in silico cohorts' analyses Samples from two independent cohorts of patients with HCC were analysed: (1) Retrospective-1: 172 paired HCC and non-tumour adjacent tissue (NTAT) samples; (2) Retrospective-2: HCC tissue (n = 57), NTAT (n = 47), and normal liver samples (n = 5). Paired samples [HCC tissue (n = 42) and NTAT (n = 42)] from the Retrospective-2 cohort and normal liver samples (n = 5) were used for the proteomic analysis (Proteomic cohort). A subset of Retrospective-2 (Protein validation cohort) was used for validation by Western Blot. Clinical and demographic characteristics of these cohorts are presented in the Supplemental Tables. Data of in silico HCC cohorts were obtained from The Cancer Genome Atlas (TCGA) and the Genotype-Tissue Expression (GTEx) projects, Xena Browser (CPTAC - PDC000199), and the Gene Expression Omnibus (GEO) database (GSE6764, GSE14323, GSE14520-Training, GSE14520-Testing, GSE3500, and GSE40367). Further details are provided in Supplemental Materials and Methods. RNA processing and expression analyses RNA isolation, retrotranscription, and RNA expression analysis by conventional qPCR were performed as previously described. Simultaneous, multiple gene expression determination (48/96 genes in 48/96 samples) was performed with a qPCR dynamic array based on microfluidic technology (Fluidigm, San Francisco, CA, USA). Primers used for expression analyses are included as a Supplemental Table. Further details are provided in Supplemental Materials and Methods. Quantitative proteomics A Sciex TripleTOF was used for quantitative proteomics on cytosolic and nuclear fractions of patients' samples after a first data-dependent acquisition (DDA) experiment for library construction. A second experiment on definitive samples was performed using the SWATH method, with peptide mapping using MarkerView (Sciex). Quantitative proteomics on cell lines was performed on a Bruker timsTOF [DIA-PASEF mode ("short gradient")] with a library prepared by a previous DDA-PASEF experiment. Peptides were identified and mapped using machine learning through the Spectronaut 18 (SN18) software, version 3.230830.50606 (Biognosys). SN18 parameters are presented in Supplemental Data 2. In both cases, the false discovery rate (FDR) was set to 0.01 for the peptides and proteins identified. Further details on sample processing and quantitative proteomics workflows are provided in Supplemental Materials and Methods. Cell lines and treatments In vitro assays were performed in two liver cancer-derived cell lines (ATCC, Manassas, USA): Hep3B (HB-8064) and SNU-387 (CRL-2237). Details of cell culture conditions are provided in Supplemental Materials and Methods.
In vitro modulation of gene expression and functional studies Lipid-based transfection was performed in Hep3B and SNU-387 cell lines for transient silencing and overexpression of VARS1 and MAGI1. Proliferation, migration, colony formation, and tumorsphere formation assays were performed as previously described. The detailed protocols are provided in Supplemental Materials and Methods. In vivo models Stable VARS1-overexpressing Hep3B cells were used for performing an Extreme Limiting Dilution Assay (ELDA) and an orthotopic tumour formation assay in Foxn1nu/Foxn1nu mice (Janvier Labs, Le Genest-Saint-Isle, France). The detailed protocols are provided in Supplemental Materials and Methods. Bioinformatic and statistical analysis Quantitative proteomics data were normalized by Total Sum Area. Proteomic and transcriptomic data from the HCC cohorts, as well as the results of the in vitro and in vivo experiments, were analysed using MetaboAnalyst 5.0, Reactome, Ingenuity Pathway Analysis (IPA), Gene Set Enrichment Analysis (GSEA), and GraphPad Prism 9.4. Detailed methods of data analysis are provided in Supplemental Materials and Methods.
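A brief, hedged illustration of the total-sum scaling mentioned above is given below; the table layout (proteins as rows, samples as columns) and the scaling constant are assumptions for illustration, not the authors' exact pipeline.

```python
# Sketch of total-sum normalisation: each sample (column) is scaled so that all
# samples end up with the same total signal.
import pandas as pd

def total_sum_normalise(areas: pd.DataFrame, scale: float = 1e6) -> pd.DataFrame:
    """areas: raw peak areas with proteins as rows and samples as columns."""
    return areas.div(areas.sum(axis=0), axis=1) * scale
```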
Below is the link to the electronic supplementary material. Supplementary Material 1 Supplementary Material 2 Supplementary Material 3 |
The FIGO ovulatory disorders classification system | 8cbaba02-a4bb-4b77-9d21-a5a1852a3b49 | 10086853 | Gynaecology[mh] | INTRODUCTION Ovulatory disorders are common in girls and women of reproductive age and are associated with episodic or chronic dysfunction of the hypothalamic–pituitary–ovarian (H‐P‐O) axis. , These disorders may adversely affect quality of life when they manifest with infertility or as aberrations in menstrual function. Menstrual symptoms may include altered frequency or regularity of flow, as well as prolonged or heavy menstrual bleeding (HMB), or even a complete absence of menstrual blood flow, referred to as amenorrhea. Reproductive function may be adversely impacted as chronic anovulation is a common cause of infertility. While there are numerous known causes and contributors to ovulatory disorders, the entire spectrum of mechanisms of pathogenesis remains to be fully elucidated. Ovulatory disorders are often associated with underlying endocrinopathies, neoplasms, psychological and psychiatric conditions, and the use of specific pharmacologic agents. Optimally effective research, teaching, and clinical management of ovulatory disorders has been impeded by the absence of a comprehensive, internationally recognized and utilized structured classification system. The WHO system for ovulatory disorders was first presented as a monograph in 1973 and has been modified over time in various reviews and book chapters by single authors rather than international consensus. Some 50 years later, much more is known about ovulatory disorders. As a result, the International Federation of Gynecology and Obstetrics (FIGO) has undertaken a process whereby the global community of stakeholders involved with ovulatory disorders has designed a new system to better meet the needs of investigators, clinicians, and medical educators worldwide. The development of the system started with the formation of an Ovulatory Disorders Steering Committee (ODSC) comprising members of FIGO's Committee on Menstrual Disorders (MDC) (now the Committee on Menstrual Disorders and Related Health Impacts, or MDRHI) and Committee on Reproductive Medicine, Endocrinology, and Infertility. The involvement of the MDRHI reflects the common and important impact of ovulatory disorders on menstrual bleeding experience, an entity referred to as AUB‐O in FIGO System 2 (see below). BACKGROUND AND RATIONALE 2.1 Defining ovulatory disorders In the reproductive years—and in the absence of pregnancy, the process of lactation, or the use of pharmacological agents such as contraceptive steroids—the normal woman releases a mature oocyte from a Graafian follicle in a relatively predictable and cyclical fashion. However, a consensus definition of ovulatory disorders, sometimes called ovulatory dysfunction, has been lacking. The notion of anovulation or absent ovulation is but one manifestation, but there exists a spectrum of chronic or episodic conditions or circumstances that also disrupt the predictable and cyclical ovulatory process. Previously, infrequent ovulation has been termed “oligo‐ovulation,” which typically, but not always, manifests with some combination of infrequent and irregular onset of menstruation as defined in FIGO AUB System 1 (FIGO discontinued the term oligomenorrhea). 
However, and recognizing that many women with ovulatory disorders may have normal‐length menstrual cycles, no clear definition of infrequent ovulation has been adopted, and this was not addressed in the joint “Committee Opinion” on Infertility Workup for the Women's Health Specialist produced by the American College of Obstetricians and Gynecologists and the American Fertility Society. Furthermore, while an occasional failure to ovulate is expected and may not contribute to infertility, it may well cause an episode of delayed onset of menses and even HMB. This circumstance begs the inclusion of intermittent anovulation in a broad‐based, all‐encompassing definition of ovarian dysfunction. An additional consideration is other aberrations in ovulatory function, such as the luteinized unruptured follicle (LUF) , and the luteal out of phase (LOOP) events 9 that represent, respectively, mechanical failure to release the mature oocyte and the premature recruitment of follicles in the luteal phase, each of which could be candidates for inclusion in the definition of ovulatory dysfunction. As a result of these considerations, it is apparent that there is an unmet need for both a revised definition of ovulatory disorders and a consensus classification system designed to guide research, education, and clinical care across disciplines. 2.2 Existing “system” and its value and limitations The original WHO classification presented three types of ovulatory dysfunction. Group I included “women with amenorrhea and with little or no evidence of endogenous estrogen activity, including patients with (a) hypogonadotrophic ovarian failure, (b) complete or partial hypopituitarism, or (c) pituitary‐hypothalamic dysfunction.” Group II was described as “Women with a variety of menstrual cycle disturbances (including amenorrhea) who exhibit distinct estrogen activity (urinary estrogens usually <10 mcg/24 h), whose urinary and serum gonadotrophins are in the normal range and fluctuating, and who may also have fairly regular spontaneous menstrual bleeds (i.e. 24–38 days apart) but without ovulation.” Group III was described as “Females with primary ovarian failure (sic, now known as primary ovarian insufficiency; POI) associated with low endogenous estrogen activity and pathologically elevated serum and urinary gonadotrophins.” This classification illustrates the now‐outdated assay methodology of the time. A second monograph was published in 1976, which presented an algorithm based upon whether the serum prolactin concentration was elevated or normal, the response to a progestagen challenge test to assess estrogenization, and whether the serum follicle‐stimulating hormone (FSH) concentration was elevated or normal. The results of these assays were to be used to define seven groups: Group I: Hypothalamic pituitary failure Group II: Hypothalamic pituitary dysfunction Group III: Ovarian failure Group IV: Congenital or acquired genital tract disorders Group V: Hyperprolactinemia, with a space‐occupying lesion Group VI: Hyperprolactinemia, with no detectable space‐occupying lesion Group VII: Non‐functioning hypothalamic/pituitary tumors Over the last 40 years, numerous descriptions of the WHO classification have appeared in various monographs and book chapters in textbooks on gynecology, infertility, and reproductive endocrinology. Multiple authors have modified the classification without any evidence of further scientific discussion or consensus development. 
Interestingly, the UK NICE Guidelines on the investigation and management of infertility, first published in 2004, describe three groups with reference to the WHO Manual for the Standardized Investigation and Diagnosis of the Infertile Couple , published in 1993. Yet this WHO manual does not contain any classification of ovulatory disorders. Nonetheless, the NICE classification encompasses the three groups that most authors refer to currently, namely: Group I: Low gonadotropins and estradiol Group II: “Gonadotropin disorder” and normal estradiol Group III: High gonadotropins and low estradiol In this classification, Group I essentially refers to hypogonadotropic hypogonadism and pituitary insufficiency but also includes hyperprolactinemia. Group II is often referred to as “hypothalamic/pituitary dysfunction,” and most consider this group to primarily comprise women with polycystic ovary syndrome (PCOS), while Group III is consistently primary ovarian insufficiency (POI). However, it is essential to appreciate that hormone levels do not obey clear rules. For example, in those with hypothalamic amenorrhea who are underweight, levels of serum luteinizing hormone (LH) are usually suppressed, while levels of FSH are often in the normal range. , In addition, women with PCOS often have levels of FSH and LH in the normal range. Furthermore, anovulation is only one extreme of ovulatory dysfunction that includes a spectrum of manifestations that range from isolated episodes to chronic ovulatory failure. Since the first iterations of the WHO classification, there have been significant advances in understanding the control of ovulation and the pathophysiology of ovulatory disorders, together with improvements in assay technology and genomics. Consequently, there exists a need for a more comprehensive and updated classification. 2.3 The FIGO Systems for Abnormal Uterine Bleeding ( AUB ) in the Reproductive Years In 2011, and again in 2018, FIGO published its two systems for describing nongestational AUB in the reproductive years, including System 2, the classification system known as “PALM‐COEIN” that categorizes causes of AUB in non‐gravid women of reproductive age, including those with ovulatory disorders (AUB‐O). These systems were developed and designed using a rigorous Delphi process, with the participants including international experts and representation from multiple and diverse stakeholder organizations, including national and subspecialty societies and journals and the US Food and Drug Administration. The overall process also included an examination of the available population databases dealing with menstruation that resulted in new, evidence‐based definitions for normal and abnormal menstrual metrics that are now known as the FIGO AUB System 1. , , The process has been iterative, with periodic revisions of systems that reside in what is described as a “living document.” The whole process has been underpinned and continues to be supported by FIGO and the FIGO Committee on Menstrual Disorders (MDC), which, since 2022, has been known as the Committee on Menstrual Disorders and Related Health Impacts. FIGO AUB System 1 describes non‐gestational normal and AUB in the reproductive years and addresses the features of menstruation, that is, frequency, regularity, duration, and perceived volume of menstrual blood loss in addition to the presence of bleeding between periods (intermenstrual bleeding) as well as unscheduled bleeding associated with the use of gonadal steroids for contraception. 
The latter is now encompassed by the increasingly used term "contraceptive-induced menstrual bleeding changes" (CiMBC). Notably, System 1 is currently based upon data from studies of women aged 18–45 years, as evidence from adolescent girls and women in the late reproductive years is less well defined. The second system, FIGO AUB System 2, describes potential causes or contributors to symptoms of AUB that are categorized in System 1. The nine categories, arranged according to the acronym PALM-COEIN, are as follows: Polyp (AUB-P); Adenomyosis (AUB-A); Leiomyoma (AUB-L); Malignancy and hyperplasia (AUB-M); Coagulopathy (AUB-C); Ovulatory dysfunction (AUB-O); Endometrial disorders (AUB-E); Iatrogenic (AUB-I); and Not otherwise classified (AUB-N). For the present context, ovulatory disorders (AUB-O) incorporate a range of disturbances in normal ovulatory function ranging from irregular to infrequent to absent ovulation.
To date, in the context of management of patients with AUB, the diagnosis of ovulatory disorders has been based mainly on a detailed menstrual history to meet the parameters that comprise FIGO System 1. In the 2018 revisions of the two FIGO systems, the recommendation was made that treatments that may interfere with the H‐P‐O axis and associated with AUB be placed within the “AUB‐I" category. The rationale and methodology for developing a sub‐classification system for AUB‐O are now presented. METHODOLOGY The approach selected was based on RAND Delphi methodology, extensively used for consensus development processes, including classification systems for medical conditions. The two FIGO systems for AUB in the reproductive years, the sub‐classification systems for leiomyomas (AUB‐L) and adenomyosis (AUB‐A), now undergoing validation, have all been developed using a version of this process. , , The project was submitted to and approved by the FIGO Executive, and FIGO's Education Communication and Advocacy Consortium (ECAC) approved the results before submission of the manuscript. 3.1 Ovulatory Disorders Steering Committee The first step was to form an Ovulatory Disorders Steering Committee (ODSC) comprising members of FIGO's MDC (now MDRHI) and Committee on Reproductive Medicine, Endocrinology, and Infertility. The chairs of each of these committees collaborated to form the ODSC by identifying eight members from their committees, adding an external member who had a leadership position in the Global PCOS Alliance. The resulting nine‐member committee had diverse reach and comprised one from each of the continents of Africa, Asia, and North America, and two from each of the European Union, the United Kingdom, and South America. The ODSC met at regular intervals between June and December 2020 to identify and engage stakeholders and develop and test the consensus process. The scope of the ODSC also included review and analysis of the results of the various rounds and the design and testing of subsequent Delphi rounds. 3.2 Stakeholder and participant identification The first task of the ODSC was to identify and engage the appropriate stakeholders necessary for the Delphi process. The chosen categories included the following: National obstetrical and gynecological societies Subspecialty societies representing reproductive endocrinologists Specialty (obstetrics and gynecology) and subspecialty (reproductive endocrinology and infertility) journals Recognized experts in ovulatory disorders not participating in categories 1–3 Lay organizations interested in infertility, AUB, or PCOS Descriptive letters were created and customized for the various categories describing the rationale for the process and a synopsis of the methodology. Via the FIGO record of member countries, each of the national obstetrical and gynecological societies was contacted and invited by email to support the process by naming a representative. The ODSC identified the spectrum of subspecialty societies on the six continents and contacted leadership to explain the process and solicit support. The descriptive letter was sent electronically to both the society headquarters and the identified participant. A similar process involved the editorial offices of relevant specialty and subspecialty journals. 
The ODSC then identified recognized experts based on a combination of personal knowledge of the field and a search of the literature, subtracting those identified by national societies, subspecialty societies, or journals for representation. Finally, the ODSC sought to identify lay organizations that could represent women and adolescent girls who may have ovulatory disorders. These groups were generally contacted directly, and if there was interest and an indication of commitment, a lay‐based version of the letter was sent. 3.3 The Delphi consensus process 3.3.1 | Background and scoring system The Delphi process was developed by the RAND Corporation as a method for determining multi‐stakeholder expert consensus in a semi‐anonymous fashion that minimizes the impact of interpersonal issues on the outcome. Originally designed to forecast the impact of technology on warfare, it has subsequently been utilized across a number of disciplines including health care. Versions of the Delphi Process were used previously in the development of the FIGO AUB systems , , and are generally similar to the original RAND system comprising a series of survey rounds designed to be administered in a web‐based or live environment with electronic scoring. Members of the ODSC did not participate in the Delphi process as participants. The scoring system has nine levels (1–9), with “1” being the most substantial disagreement with a statement, “9” the strongest agreement, and “5” representing neutrality. Scores in the top tertile (7, 8, and 9) indicated “agreement” with a statement, while those in the bottom tertile (1, 2, and 3) were indications of disagreement. As a result, the remaining scores (4, 5, and 6) comprised the “neutral” category, with “4” leaning to disagreement and “6” leaning to agreement. The minimum requirement for consensus agreement was a mean score of at least 7 (scores of 6.5–6.9 were rounded to 7), with no more than 15% in the disagreement category. Conversely, “disagreement” was defined as a mean score of 3 or less (scores of 3.1–3.4 were rounded to 3), with no more than 15% in the agreement category. For each statement or question in a survey, there is a field to allow for free‐text comments by the participants. 3.3.2 | Participant orientation meeting Before distributing the first round of surveys, two orientation meetings for the participants were held to ensure that the appropriate contact information was in the study database and systems and that all understood the survey mechanisms. The two meetings were held on the Zoom platform (Zoom Video Communications Inc, San Jose, CA, USA), with dates and times selected to facilitate flexibility for the diverse group of participants, particularly considering the spectrum of world time zones involved. Included in the messaging of this meeting was the understanding that Delphi participant answers would remain confidential and that all distributions would be anonymized. Demonstrations of the functionality of the system were provided. A session was recorded and uploaded to an accessible server for individuals who could not attend either of the live, web‐based meetings and to provide a resource for all participants who wished to review the instructions on their own time. It is to be noted that the lay component of the process was planned to occur after the medical stakeholders had developed a draft system. 
3.3.3 | Conduct of the first round The first round of the Delphi process was designed to identify the participants' age, gender, location, expertise, and constituency and evaluate general opinions, the latter using statements intended to elicit an “agree” or “disagree” response. These statements were crafted in a fashion that invited and measured opinions regarding the clinical relevance of ovulatory disorders, the need for a well‐designed classification system, and the broad categories that should be included if such a system was to be designed. The draft set of questions was created by the Chair of the ODSC, reviewed by the committee members in meetings using the Zoom platform, and then tested on the web‐based survey instrument SurveyMonkey (Momentive, San Mateo, CA, USA). The final version of the first round was distributed to the stakeholders via their identified email addresses within the web‐based survey system. The ODSC Chair, who also functioned as the Facilitator, kept track of responses and sent out reminder emails at intervals of 7–10 days until there were no additional responses. The data were then exported to an Excel (Microsoft Corp, Everett, WA, USA) workbook comprising spreadsheets containing the survey template that automatically calculated means and the percentage of answers in the agree (7–9), neutral (4–6), and disagree (1–3) categories. The free‐text comments made by the participants were also included in the spreadsheet. The ODSC reviewed these data as a prelude to the design of the second round. The aggregate anonymized results were sent to each participant along with a copy of their responses for comparative purposes. 3.3.4 | Conduct of the second round The second‐round survey was constructed, in part, based upon the first‐round results. Some “neutral” responses that had marginal scores close to 3 or 7, or defined principally by the outliers, were reviewed in particular because, in such circumstances, it was possible that rewording a question or providing appropriately representative evidence would result in a change in the participant's opinion. It was also possible that “re‐asking” the question in the context of individual participant understanding of the group response might result in changes in individual responses. This information allowed the ODSC to construct a second survey round that eliminated items with defined agreement or disagreement but included reworded statements and new statements seeking to refine and expand the criteria that the participants thought necessary. The distribution of the second‐round survey was confined to those participating in and responding to the first round. The web‐based system, distribution, and follow‐up reminder technique were again employed. The data were retrieved, exported into the same Excel workbook with worksheet templates, and analyzed by the ODSC. Similarly, the participants received an anonymized summary of the participant responses to each of the items and a copy of their answers for comparison. At this point, the committee had enough information to design a draft system that addressed and included the elements identified in the first two Delphi rounds. This was conducted iteratively until a draft acceptable to all ODSC members was created. 3.3.5 | Conduct of the third round As a prelude to the live stakeholder meeting, a short clarifying third round was created, tested, distributed, and the results analyzed by the ODSC, conducted in a fashion similar to that of the first two rounds. 
Included in this round was a version of the draft system with solicitation of preliminary opinions from the participants. As was the case for the first two rounds, each participant was provided an anonymized copy of the results of the previous round and a copy of their responses, all for review before the live participant meeting. 3.3.6 | Participant meeting All medical participants and the ODSC were invited to participate in the stakeholder meeting held live on the Zoom platform. Here, the overall results of the survey rounds were presented, including those items where consensus one way or the other had not been reached. The draft system was also reviewed. An open discussion was invited, and preliminary polls were taken using the system available on the Zoom platform. 3.3.7 | Post-meeting and fourth survey round The ODSC undertook the post-meeting analysis. Subsequently, a short fourth-round poll was conducted to reach a consensus on the remaining elements and include individuals who could not participate in the live meeting. 3.3.8 | Lay round The lay round was designed to query the lay representatives, both for their perception of a need for a classification system and their opinions of the system developed by the expert and representative participants. A separate survey was designed that included some of the items in the medical participant rounds but presented in a fashion accessible by a lay audience. There was a focus on their opinions of clarity and utility in the context of discussion and counseling involving healthcare practitioners and patients. The draft lay-round elements were reviewed and revised by the ODSC, uploaded to the SurveyMonkey platform, tested, and then distributed to the participants in a fashion similar to that used for the medical participant rounds. The results were reviewed and analyzed by the ODSC, who considered these opinions in revising the system and constructing the manuscript and the design of materials for the lay audience.
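The scoring and consensus rules described for these rounds lend themselves to a short worked example. The sketch below applies the stated thresholds (a mean that rounds to at least 7 with no more than 15% of scores in the disagreement tertile, or a mean that rounds to 3 or less with no more than 15% in the agreement tertile) to a single statement's scores; the scores shown are invented for illustration, and the function is not the survey platform's own tooling.

```python
# Hedged sketch of the per-statement consensus rule used in the Delphi analysis.
from statistics import mean

def classify_consensus(scores: list[int]) -> str:
    avg = mean(scores)
    agree = sum(7 <= s <= 9 for s in scores) / len(scores)      # top tertile
    disagree = sum(1 <= s <= 3 for s in scores) / len(scores)   # bottom tertile
    if avg >= 6.5 and disagree <= 0.15:   # 6.5-6.9 rounds up to 7
        return "consensus agreement"
    if avg <= 3.4 and agree <= 0.15:      # 3.1-3.4 rounds down to 3
        return "consensus disagreement"
    return "no consensus"

print(classify_consensus([8, 9, 7, 6, 8, 9, 7, 8, 2, 9]))  # -> consensus agreement
```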
The chosen categories included the following:

1. National obstetrical and gynecological societies
2. Subspecialty societies representing reproductive endocrinologists
3. Specialty (obstetrics and gynecology) and subspecialty (reproductive endocrinology and infertility) journals
4. Recognized experts in ovulatory disorders not participating in categories 1–3
5. Lay organizations interested in infertility, AUB, or PCOS

Descriptive letters were created and customized for the various categories describing the rationale for the process and a synopsis of the methodology. Via the FIGO record of member countries, each of the national obstetrical and gynecological societies was contacted and invited by email to support the process by naming a representative. The ODSC identified the spectrum of subspecialty societies on the six continents and contacted leadership to explain the process and solicit support. The descriptive letter was sent electronically to both the society headquarters and the identified participant. A similar process involved the editorial offices of relevant specialty and subspecialty journals. The ODSC then identified recognized experts based on a combination of personal knowledge of the field and a search of the literature, subtracting those identified by national societies, subspecialty societies, or journals for representation. Finally, the ODSC sought to identify lay organizations that could represent women and adolescent girls who may have ovulatory disorders. These groups were generally contacted directly, and if there was interest and an indication of commitment, a lay‐based version of the letter was sent.

The Delphi consensus process

3.3.1 | Background and scoring system

The Delphi process was developed by the RAND Corporation as a method for determining multi‐stakeholder expert consensus in a semi‐anonymous fashion that minimizes the impact of interpersonal issues on the outcome. Originally designed to forecast the impact of technology on warfare, it has subsequently been utilized across a number of disciplines including health care. Versions of the Delphi process were used previously in the development of the FIGO AUB systems and are generally similar to the original RAND system comprising a series of survey rounds designed to be administered in a web‐based or live environment with electronic scoring. Members of the ODSC did not participate in the Delphi process as participants. The scoring system has nine levels (1–9), with "1" being the most substantial disagreement with a statement, "9" the strongest agreement, and "5" representing neutrality. Scores in the top tertile (7, 8, and 9) indicated "agreement" with a statement, while those in the bottom tertile (1, 2, and 3) were indications of disagreement. As a result, the remaining scores (4, 5, and 6) comprised the "neutral" category, with "4" leaning to disagreement and "6" leaning to agreement. The minimum requirement for consensus agreement was a mean score of at least 7 (scores of 6.5–6.9 were rounded to 7), with no more than 15% in the disagreement category. Conversely, "disagreement" was defined as a mean score of 3 or less (scores of 3.1–3.4 were rounded to 3), with no more than 15% in the agreement category. For each statement or question in a survey, there was a field to allow for free‐text comments by the participants.
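The tallying and consensus rules above amount to straightforward arithmetic. The following Python sketch is illustrative only and is not part of the FIGO methodology; the function name and example scores are hypothetical. It mirrors the per‐statement tally described for the survey workbook (mean plus the percentage of responses in each tertile) and then applies the stated agreement and disagreement rules, including the rounding conventions.

```python
def summarize_statement(scores):
    """Return the mean, tertile percentages, and consensus verdict for one statement."""
    n = len(scores)
    mean = sum(scores) / n
    pct_agree = 100 * sum(1 for s in scores if s >= 7) / n
    pct_neutral = 100 * sum(1 for s in scores if 4 <= s <= 6) / n
    pct_disagree = 100 * sum(1 for s in scores if s <= 3) / n

    # Rounding conventions from the text: a mean of 6.5-6.9 counts as 7,
    # and a mean of 3.1-3.4 counts as 3.
    mean_for_agreement = 7 if 6.5 <= mean < 7 else mean
    mean_for_disagreement = 3 if 3 < mean <= 3.4 else mean

    if mean_for_agreement >= 7 and pct_disagree <= 15:
        verdict = "consensus agreement"
    elif mean_for_disagreement <= 3 and pct_agree <= 15:
        verdict = "consensus disagreement"
    else:
        verdict = "no consensus"
    return mean, (pct_agree, pct_neutral, pct_disagree), verdict

# Ten hypothetical participants scoring a single statement on the 1-9 scale.
print(summarize_statement([8, 9, 7, 7, 6, 8, 9, 7, 5, 8]))
```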
3.3.2 | Participant orientation meeting

Before distributing the first round of surveys, two orientation meetings for the participants were held to ensure that the appropriate contact information was in the study database and systems and that all understood the survey mechanisms. The two meetings were held on the Zoom platform (Zoom Video Communications Inc, San Jose, CA, USA), with dates and times selected to facilitate flexibility for the diverse group of participants, particularly considering the spectrum of world time zones involved. Included in the messaging of this meeting was the understanding that Delphi participant answers would remain confidential and that all distributions would be anonymized. Demonstrations of the functionality of the system were provided. A session was recorded and uploaded to an accessible server for individuals who could not attend either of the live, web‐based meetings and to provide a resource for all participants who wished to review the instructions on their own time. It is to be noted that the lay component of the process was planned to occur after the medical stakeholders had developed a draft system.
RESULTS

4.1 Medical expert participants

A total of 88 invitations were sent to the responding national gynecological and obstetrical societies, experts at large, and the delegated representatives of journals and subspecialty societies. Ultimately, 46 individuals from all six continents responded and participated in the first Delphi round; approximately half were from Europe (Figure ), with age and gender distribution demonstrated in Figure . Of these, 28 (61%) were men and 18 (39%) were women. Over half of the participants (59%) were national society representatives, and 19% were experts at large (Figure ). Participants were asked about their principal role, and 72% responded "clinical care," with the rest distributed across clinical research, teaching, and epidemiology. The secondary roles included clinical research, reported by 36%, and education by 24%, with some reporting bench research, administrative duties, and editorial responsibilities (Figure ).

4.2 Results of rounds 1–3

The results from rounds 1, 2, and 3 are shown in Tables , , and , respectively. In round 1, of 37 items, there was consensus on all but five. There was general support for the stated definition of ovulatory disorders and the rationale for a consensus classification system to support research, teaching, and clinical care. Respondents neither supported nor disagreed with the statement "The WHO classification system, in its current form, would meet the needs for a contemporary classification system for ovulatory disorders." There was broad support for a spectrum of potential causes of ovulatory disorders except for idiopathic mechanisms and LOOP cycles. 9 The ODSC took these results and developed and tested the second Delphi round before distributing it to the 46 respondents in the first round. There were 41 respondents with the results of the 22 items shown in Table . The results of the second round suggested that there would be support for an anatomically based system (hypothalamus, pituitary, ovarian) with a separate category for PCOS. There was general support for this concept, with a mean score of 7.1. The survey also explored the notion of distinguishing chronic from isolated or intermittent ovulatory disorders, and this concept received consensus support with a mean score of 7.5 with no respondent disagreeing. Importantly, no consensus was reached on the question of using the Rotterdam Criteria to define PCOS, as 22.0% were in disagreement despite a mean overall score of 6.7.
The second round was also designed to clarify some items from the first round and to identify more granular concepts relating to the pathogenesis of ovulatory disorders. There was a lack of consensus regarding the role of ovarian neoplasms, bacterial and viral infections, and the concept of infectious or inflammatory causes in general. There was also no consensus on the role of an absent surge of LH and LOOP events. While "menopause" as an etiology had a mean score otherwise sufficient to indicate agreement, 15% of the respondents disagreed, thereby preventing the attainment of consensus. With these data, the ODSC devised a draft system based upon anatomy that included a separate component for PCOS. Before distributing to the participants, and as a prelude to the live virtual meeting of the participants in the Delphi process, a five‐item third round was developed, tested, and distributed. Included in the distribution to the participants was evidence describing and evaluating LOOP events and the potential role of ovarian neoplasms and infectious or inflammatory disorders in the pathogenesis of ovulatory dysfunction. Related items were modified, and the results from the 38 respondents are displayed in Table . There was now consensus support for the inclusion of menopause and LOOP events, but lack of agreement on the role of ovarian neoplasms and infectious or other inflammatory disorders in the genesis of ovulatory dysfunction.

4.3 Live meeting

For the live meeting, the ODSC distributed the draft system and an Excel workbook comprising a summary of the results of the three rounds and how the consensus agreements attained were integrated into the design. The live meeting was conducted on August 25, 2021, using the Zoom video platform. The meeting agenda included a review of the rationale for the process and the results of the three Delphi rounds, summarizing areas of agreement and focusing on the few places where consensus had not been reached. Only 22 respondents were able to attend, so it was not possible to survey the group formally during the meeting. Still, there was a strong indication of support for the system based upon an in‐meeting electronic poll. The formal process was the subject of the fourth round.

4.4 Results of round 4

For this round, the ODSC sought the participants' opinions on the draft system and tried to resolve some of the remaining items upon which there was a persisting lack of consensus. For this four‐item survey, there were 39 respondents, with the results displayed in Table . There was support for the presented system by 95% of the respondents (mean score 8.0), with disagreement of only 2.6%.

4.5 Results of the lay round

The lay round, as planned, was conducted following the deliberations of the experts and the society and journal representatives, and the development of the draft FIGO Ovulatory Disorders Classification System. The results of the 11‐item survey sent to 17 individuals can be seen in Table .
The first three items were designed to obtain demographic data; all 10 respondents were women representing organizations from Africa, Europe, and North America with an age distribution of 25–54 years. There was general agreement on the definition of ovulatory disorders and their potential role in the genesis of infertility. However, there was no consensus on the contribution of ovulatory disorders to symptoms of AUB. While there was agreement that girls and women often do not understand the causes of ovulatory disorders, there was uncertainty regarding reasons unknown to healthcare providers and other medical professionals. There was a clear consensus that a well‐conceived system of classifying ovulatory disorders would improve the design and interpretation of research and facilitate communication between patients and healthcare practitioners. However, the support for the draft system was mixed with a mean score of 4.9 and only 33% agreeing that the system was "understandable" and one that could provide "a platform upon which a lay audience" could "gain insight into the possible causes of ovulatory disorders." The comments from the participants were illuminating (Table ) and, in some instances, mirrored comments from the other participants. Respecting these comments, the ODSC altered the graphical representation of the system without changing the content, placing the PCOS panel at the bottom, allowing for the use of the acronym "HyPO‐P." In addition, a draft lay version of the major elements of the system was developed with lay language that was nonetheless compatible with the medical version (Supplementary Material). This draft was distributed to lay participants and their comments were generally incorporated into the text, and into modifications of the graphical content.
PROPOSED HyPO‐P SYSTEM

5.1 Rationale and development

The system was designed to align with the results of the Delphi process (see Supplementary Table ). There was support for a design that grouped the causes of ovulatory disorders anatomically, a logical extension of the former WHO classification but more precise and more accessible than one based primarily on hormone assays. It was, therefore, rational to design this classification system according to the levels of the H‐P‐O axis as reflected in the second Delphi round (Table , question 1). It was also considered essential to allow for the designation of any element that is known or suspected to alter the functionality of the organ in a fashion that could contribute to the genesis of ovulatory dysfunction, whether related to demonstrable histopathology, abnormal laboratory assays, iatrogenic mechanisms, or even functional disorders without measurable laboratory features.
However, it was recognized that an important cause of ovulatory disorders is PCOS since it affects 8%–13% of women of reproductive age. It is a complex and heterogeneous condition with comprehensive international guidelines for diagnosis, investigation, and management that cannot be confined to an ovarian origin. Therefore, it was determined that PCOS constitutes a class apart from the anatomical categorization, a notion that was supported in the second round of the Delphi process (Table , question 2). Therefore, the proposed FIGO classification now includes ovulatory disorders categorized into four groups as follows: Type I: Hypothalamic; Type II: Pituitary; Type III: Ovarian; and Type IV: PCOS (Figure ). The system can be referred to by the acronym "HyPO‐P," where the "P" is separated from the other three categories recognizing that it does not reside in a single anatomic location. The new system provides practical utility and a second layer, or sub‐classification, for each of the three anatomically defined entities, including discrete pathophysiological categories. These can be remembered using the acronym "GAIN‐FIT‐PIE" (Figure ). A detailed description of every known or suspected cause of ovulatory dysfunction is beyond the scope of the present paper. Still, the new classification is presented with references to some of the many included conditions. Supplementary Table shows the linkages between various potential causes or categories of causes and the elements in the FIGO Ovulatory Disorders Classification System.
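For readers who want to refer to the primary layer programmatically, it reduces to a small lookup. The snippet below is a minimal illustrative sketch, not part of the FIGO publication; the names are hypothetical, and the GAIN‐FIT‐PIE sub‐classification and the specific causal entities (the second and third layers) are deliberately left out because their full enumeration sits in the figures and supplementary material.

```python
# Minimal sketch of the primary (first-layer) HyPO-P categories named above.
# Hypothetical names; the GAIN-FIT-PIE sub-classification and specific causal
# entities form the second and third layers and are not enumerated here.

HYPO_P_PRIMARY = {
    1: "Hypothalamic",
    2: "Pituitary",
    3: "Ovarian",
    4: "PCOS",  # set apart because it is not confined to a single anatomic site
}

def primary_label(type_number: int) -> str:
    return f"Type {type_number}: {HYPO_P_PRIMARY[type_number]}"

print(primary_label(4))  # -> "Type 4: PCOS"
```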
USE OF THE FIGO OVULATORY DISORDERS CLASSIFICATION SYSTEM

6.1 Clinical application

6.1.1 | Identifying individuals with ovulatory disorders

The new system is designed for clinicians, educators, and investigators, including those involved in basic, translational, clinical, and epidemiological research. Depending on the audience, educators may focus only on the four primary categories or add the detail afforded by the second GAIN‐FIT‐PIE stratification. To be categorized by the system, the individual or patient must be identified as having an ovulatory disorder. Several potential clinical "entry points" are based on suspicion or knowledge of an ovulatory disorder; these range from delayed menarche to infrequent or irregular menstruation, through to presentation with primary or secondary infertility, hirsutism, or other features or findings associated with PCOS. The term "ovulatory disorder" is not synonymous with the term "anovulation." Instead, ovulatory disorders are considered to exist on a spectrum ranging from episodic to chronic (Figure ). Individuals may present with a chronic problem or may experience a singular episode where an anovulatory "cycle" manifests with delayed onset of HMB. Especially in the late reproductive years, women may experience regular, predictable cycles of normal length but experience HMB as the development of follicles in the luteal phase contributes to high premenstrual estradiol levels, a process known as a LOOP cycle. 9 Individuals with primary amenorrhea deserve special attention, and details regarding their investigation are beyond the scope of the present paper. However, in general, primary amenorrhea is said to be present when menstruation has not yet occurred by the age of 14 years in the absence of secondary sexual characteristics (when it is called delayed puberty) or 16 years in the presence of secondary sexual characteristics. Associated symptoms such as cyclical pelvic pain may suggest the presence of ovulation in association with a Müllerian anomaly or other obstruction that should be appropriately investigated without delay. Most, but certainly not all, ovulatory disorders are suggested by the presence of symptoms of AUB, ranging from complete absence (amenorrhea) to infrequent or irregular onset of menstrual blood flow. Secondary amenorrhea is generally defined as the cessation of menstruation for 6 consecutive months after at least one previous spontaneous menstrual bleed. Using data from extensive epidemiological studies, FIGO has previously determined that for those aged 18–45 years, and using the 5%–95% percentiles from large‐scale population studies, the normal frequency of menses is 24–38 days. Those with a cycle length of fewer than 24 days are deemed "frequent" while those whose cycle length is more than 38 days "infrequent," a term designed to replace oligomenorrhea.
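These menstrual‐history definitions reduce to simple threshold checks. The Python sketch below is illustrative only and assumes an adult aged 18–45; the function names are hypothetical and this is not a diagnostic tool. It encodes the frequency categories and the secondary amenorrhea definition just given, together with the age‐dependent regularity limits described in the next paragraph.

```python
# Illustrative sketch of the FIGO menstrual-history thresholds for adults aged
# 18-45 described in this section. Hypothetical names; not a diagnostic tool.

def classify_frequency(cycle_length_days: int) -> str:
    """Normal frequency of menses is 24-38 days; <24 is 'frequent', >38 'infrequent'."""
    if cycle_length_days < 24:
        return "frequent"
    if cycle_length_days > 38:
        return "infrequent"
    return "normal"

def is_irregular(cycle_lengths_days: list[int], age_years: int) -> bool:
    """Shortest-to-longest variation should be <=9 days at ages 18-25 or 42-45,
    and <=7 days at ages 26-41."""
    variation = max(cycle_lengths_days) - min(cycle_lengths_days)
    limit = 7 if 26 <= age_years <= 41 else 9
    return variation > limit

def suggests_secondary_amenorrhea(days_since_last_bleed: int,
                                  prior_spontaneous_bleed: bool) -> bool:
    """Cessation of menstruation for 6 consecutive months (approximated here as
    183 days) after at least one previous spontaneous bleed."""
    return prior_spontaneous_bleed and days_since_last_bleed >= 183

print(classify_frequency(45))                    # 'infrequent'
print(is_irregular([26, 29, 38], age_years=30))  # True (12-day variation > 7)
print(suggests_secondary_amenorrhea(200, True))  # True
```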
Even in this category, regularity varies by age; for those aged either 18–25 or 42–45 years, the difference between the shortest and longest cycle should be 9 days or less, while for those aged 26–41 years, it is 7 days or less. Regardless, those with infrequent or irregular menstrual bleeding should be considered to have an ovulatory disorder.

Diagnosing the presence of an ovulatory disorder at the extremes of reproductive age can be challenging, depending on the perception of what is normal. For postmenarcheal girls aged under 18 years, infrequent menstrual bleeding or irregular menstrual cycles suggesting ovulatory dysfunction are common, with available evidence suggesting that the individual's "normal" cycle length may not be established until the sixth year after menarche. During this pubertal transition, ovulatory dysfunction impacts about 50% of adolescent girls in the first year after menarche with a cycle length that is typically in the range of 21–45 days but sometimes is as short as 20 days or may even exceed 60 days. In the years after menarche, these variations change such that 6 years later, the range is similar to those of adults. These issues can be explored in detail elsewhere. However, it should be remembered that while common, and even "normal," the individual's experience with this transition can be disruptive at a vulnerable time in their social, psychological, and physical development. A somewhat similar experience exists at the opposite end of the reproductive age spectrum, beyond the age of 45 years, as women enter what has been called the menopausal transition, where cycle length typically becomes more infrequent or irregular before culminating in amenorrhea as ovarian secretion of estradiol declines and ultimately ceases. However, this experience is perhaps even less orderly than that of the post‐menarcheal period, as there may be highly variable endocrine changes resulting in unpredictable impacts on menstrual function. Again, what is common, and often portrayed as "normal", can be highly disruptive, particularly when coupled with other symptoms.

Women who present with infertility may have accompanying menstrual symptoms typical of ovulatory disorders. However, women with cyclically normal onset of menstrual bleeding may not be ovulating, or at least not ovulating regularly, as the frequency of single‐cycle anovulation in the context of normal regular cycles is in the range of 3.7%–26.7%. Consequently, further evaluation beyond a detailed history will be necessary to identify those with ovulatory disorders.

The optimal way to assess for ovulation and, by extension, confirm ovulatory disorders may vary according to the clinical circumstance. The menstrual history of regular, predictable cycles between 24 and 38 days remains a helpful tool, and reflects the overall experience better than evaluation of endocrine or imaging parameters from a single cycle does. While patients and clinicians have traditionally used measurement of basal body temperature, interpretation can be difficult, so this approach should be used with caution. If available, ovulation predictor kits that measure the levels of luteinizing hormone in urine samples generally accurately reflect levels of serum luteinizing hormone and are a valuable tool for detecting ovulation in a given cycle.
Simply measuring progesterone in the predicted luteal phase may provide satisfactory evidence supporting ovulatory function, particularly when the first day of the next menstrual period is known. Such an approach may be helpful in circumstances such as hirsutism, where the incidence of anovulation in women with cyclically predictable menstrual cycles is higher. There are other, less common ovulatory disorders that may require more complex evaluation to determine if they are present in a given individual. For example, identifying LUF cycles, somewhat common in infertile women, requires both confirmation of the LH surge and the performance of serial ultrasound to demonstrate failed rupture of the dominant follicle. It should be remembered that scrutiny of a single cycle may not reflect the overall experience for a given individual.

6.1.2 | Categorization in the FIGO Ovulatory Disorders Classification System

The new system recognizes three basic strata once an ovulatory disorder has been diagnosed. The first level is categorization by one of the four primary categories as follows: Type I: Hypothalamus; Type II: Pituitary; Type III: Ovary; and Type IV: PCOS. The second level requires assignment to the known or suspected anatomically based abnormality as directed by the GAIN‐FIT‐PIE acronym. The third or tertiary level identifies a specific entity causing or contributing to the ovulatory disorder. Categorizing into these levels requires that the clinician perform whatever investigations are deemed appropriate to localize the site and the presumed underlying mechanism contributing to ovulatory dysfunction. For example, the individual with infrequent and irregular menses, galactorrhea, elevated prolactin, and a magnetic resonance image demonstrating a pituitary tumor would be categorized as type 2 – N (pituitary neoplasm). The same might be said about an individual with irregular and infrequent menstruation, mild hirsutism, and sonographic evidence of at least one symmetrically enlarged ovary (≥10 ml) or an ovary with more than 20 follicles without a dominant follicle or corpus luteum, a circumstance that dictates a type 4 – PCOS classification. The 20‐follicle threshold is applied only when the patient is examined with an endovaginal ultrasound transducer with a high frequency bandwidth of at least 8 MHz. It is recognized that the precision in determining the anatomic location and the mechanism of pathogenesis is somewhat aspirational and will vary to a degree by the disorder and the resources available to the clinician. Further discussion of the detection, characterization, and management of ovulatory disorders is beyond the scope of the present paper, which is designed to provide a structure for clinical care, investigation, and education.
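To make the three‐level coding concrete, the sketch below assembles labels for the two vignettes just described and encodes the stated sonographic criterion from the PCOS example. It is illustrative only; the data structure, field names, and helper function are hypothetical and are not defined by the FIGO system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OvulatoryDisorderCode:
    primary: int                 # 1-4: hypothalamic, pituitary, ovarian, PCOS
    secondary: Optional[str]     # GAIN-FIT-PIE letter, e.g. "N" for neoplasm
    entity: Optional[str]        # specific causal entity, free text

    def label(self) -> str:
        parts = [f"Type {self.primary}"]
        if self.secondary:
            parts.append(self.secondary)
        if self.entity:
            parts.append(f"({self.entity})")
        return " - ".join(parts)

def pcos_sonographic_criterion(ovarian_volume_ml: float, follicle_count: int,
                               transducer_mhz: float) -> bool:
    """Criterion as stated in the text: an enlarged ovary (>=10 ml) or more than
    20 follicles (without a dominant follicle or corpus luteum); the 20-follicle
    threshold applies only with an endovaginal transducer of at least 8 MHz."""
    volume_ok = ovarian_volume_ml >= 10
    follicles_ok = follicle_count > 20 and transducer_mhz >= 8
    return volume_ok or follicles_ok

# Vignette 1: galactorrhea, elevated prolactin, pituitary tumor on MRI.
print(OvulatoryDisorderCode(2, "N", "pituitary neoplasm").label())  # Type 2 - N - (pituitary neoplasm)

# Vignette 2: irregular, infrequent menses, mild hirsutism, PCOS-range ultrasound.
if pcos_sonographic_criterion(ovarian_volume_ml=11.0, follicle_count=24, transducer_mhz=9.0):
    print(OvulatoryDisorderCode(4, None, "PCOS").label())            # Type 4 - (PCOS)
```

A record of this shape would also make it straightforward to tabulate cases by primary type or by second‐layer category in a research dataset, which is one of the stated motivations for the classification.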
Several potential clinical “entry points” are based on suspicion or knowledge about the presence of an ovulatory disorder that range from delayed menarche to infrequent or irregular menstruation through to presentation with primary or secondary infertility or hirsutism or other features or findings associated with PCOS. The term “ovulatory disorder” is not synonymous with the term “anovulation.” Instead, ovulatory disorders are considered to exist on a spectrum ranging from episodic to chronic (Figure ). Individuals may present with a chronic problem or may experience a singular episode where an anovulatory “cycle” manifests with delayed onset of HMB. Especially in the late reproductive years, women may experience regular, predictable cycles of normal length but experience HMB as the development of follicles in the luteal phase contribute to high premenstrual estradiol levels, a process known as a LOOP cycle. 9 Individuals with primary amenorrhea deserve special attention, and details regarding their investigation are beyond the scope of the present paper. However, in general, primary amenorrhea is said to be present when menstruation has not yet occurred by the age of 14 years in the absence of secondary sexual characteristics (when it is called delayed puberty) or 16 years in the presence of secondary sexual characteristics. Associated symptoms such as cyclical pelvic pain may suggest the presence of ovulation in association with a Müllerian anomaly or other obstruction that should be appropriately investigated without delay. Most, but certainly not all, ovulatory disorders are suggested by the presence of symptoms of AUB, ranging from complete absence (amenorrhea) to infrequent or irregular onset of menstrual blood flow. Secondary amenorrhea is generally defined as the cessation of menstruation for 6 months consecutively after at least one previous spontaneous menstrual bleed. Using data from extensive epidemiological studies, FIGO has previously determined that for those aged 18–45 years, and using the 5%–95% percentiles from large‐scale population studies, the normal frequency of menses is 24–38 days. Those with a cycle length of fewer than 24 days are deemed “frequent” while those whose cycle length is more than 38 days “infrequent,” a term designed to replace oligomenorrhea. , , , , Even in this category, regularity varies by age; for those aged either 18–25 or 42–45 years, the difference between the shortest and longest cycle should be 9 days or less, while for those aged 26–41 years, it is 7 days or less. Regardless, those with infrequent or irregular menstrual bleeding should be considered to have an ovulatory disorder. Diagnosing the presence of an ovulatory disorder at the extremes of reproductive age can be challenging, depending on the perception of what is normal. For postmenarcheal girls aged under 18 years, infrequent menstrual bleeding or irregular menstrual cycles suggesting ovulatory dysfunction are common, with available evidence suggesting that the individual's “normal” cycle length may not be established until the sixth year after menarche. , , During this pubertal transition, ovulatory dysfunction impacts about 50% of adolescent girls in the first year after menarche with a cycle length that is typically in the range of 21–45 days , but sometimes is as short as 20 days or may even exceed 60 days. In the years after menarche, these variations change such that 6 years later, the range is similar to those of adults. These issues can be explored in detail elsewhere. 
, However, it should be remembered that while common, and even “normal,” the individual's experience with this transition can be disruptive at a vulnerable time in their social, psychological, and physical development. A somewhat similar experience exists at the opposite end of the reproductive age spectrum, beyond the age of 45 years, as women enter what has been called the menopausal transition, where cycle length typically becomes more infrequent or irregular before culminating in amenorrhea as ovarian secretion of estradiol declines and ultimately ceases. However, this experience is perhaps even less orderly than that of the post‐menarcheal period, as there may be highly variable endocrine changes resulting in unpredictable impacts on menstrual function . Again, what is common, and often portrayed as “normal”, can be highly disruptive, particularly when coupled with other symptoms. Women who present with infertility may have accompanying menstrual symptoms typical of ovulatory disorders. However, women with cyclically normal onset of menstrual bleeding may not be ovulating, or at least not ovulating regularly, as the frequency of single‐cycle anovulation in the context of normal regular cycles is in the range of 3.7%–26.7%. , , Consequently, further evaluation beyond a detailed history will be necessary to identify those with ovulatory disorders. The optimal way to assess for ovulation and, by extension, confirm ovulatory disorders may vary according to the clinical circumstance. The menstrual history of regular, predictable cycles between 24 and 38 days remains a helpful tool, and reflects the overall experience better than evaluation of endocrine or imaging parameters from a single cycle does. While patients and clinicians have traditionally used measurement of basal body temperature, interpretation can be difficult, so this approach should be used with caution. , If available, ovulation predictor kits that measure the levels of luteinizing hormone in urine samples generally accurately reflect levels of serum luteinizing hormone and are a valuable tool for detecting ovulation in a given cycle. Simply measuring progesterone in the predicted luteal phase may provide satisfactory evidence supporting ovulatory function, particularly when the first day of the next menstrual period is known. Such an approach may be helpful in circumstances such as hirsutism, where the incidence of anovulation in women with cyclically predictable menstrual cycles is higher. There are other, less common ovulatory disorders that may require more complex evaluation to determine if they are present in a given individual. For example, identifying LUF cycles, somewhat common in infertile women, requires both confirmation of the LH surge and the performance of serial ultrasound to demonstrate failed rupture of the dominant follicle. It should be remembered that scrutiny of a single cycle may not reflect the overall experience for a given individual. 6.1.2 | Categorization in the FIGO Ovulatory Disorders Classification System The new system recognizes three basic strata once an ovulatory disorder has been diagnosed. The first level is categorization by one of the four primary categories as follows: Type I: Hypothalamus; Type II: Pituitary; Type III: Ovary; and Type IV: PCOS. The second level requires assignment to the known or suspected anatomically based abnormality as directed by the GAIN‐FIT‐PIE acronym. The third or tertiary level identifies a specific entity causing or contributing to the ovulatory disorder. 
Categorizing into these levels requires that the clinician perform whatever investigations deemed appropriate to localize the site and the presumed underlying mechanism contributing to ovulatory dysfunction. For example, the individual with infrequent and irregular menses, galactorrhea, elevated prolactin, and a magnetic resonance image demonstrating a pituitary tumor would categorize as a type 2 – N (pituitary neoplasm). The same might be said about an individual with irregular and infrequent menstruation, mild hirsutism, and sonographic evidence of at least one symmetrically enlarged ovary (≥10 ml) or an ovary with more than 20 follicles without a dominant follicle or corpus luteum, a circumstance that dictates a type 4 – PCOS classification. Use of the 20‐follicle threshold is utilized only when the patient is examined with an endovaginal ultrasound transducer with a high frequency bandwidth of at least 8 MHz. , It is recognized that the precision in determining the anatomic location and the mechanism of pathogenesis is somewhat aspirational and will vary to a degree by the disorder and the resources available to the clinician. Further discussion of the detection, characterization, and management of ovulatory disorders is beyond the spectrum of the present study, which is designed to provide a structure for clinical care, investigation, and education. The new system is designed for clinicians, educators, and investigators, including those involved in basic, translational, clinical, and epidemiological research. Depending on the audience, educators may focus only on the four primary categories or add the detail afforded by the second GAIN‐FIT‐PIE stratification. To be categorized by the system, the individual or patient must be identified as having an ovulatory disorder. Several potential clinical “entry points” are based on suspicion or knowledge about the presence of an ovulatory disorder that range from delayed menarche to infrequent or irregular menstruation through to presentation with primary or secondary infertility or hirsutism or other features or findings associated with PCOS. The term “ovulatory disorder” is not synonymous with the term “anovulation.” Instead, ovulatory disorders are considered to exist on a spectrum ranging from episodic to chronic (Figure ). Individuals may present with a chronic problem or may experience a singular episode where an anovulatory “cycle” manifests with delayed onset of HMB. Especially in the late reproductive years, women may experience regular, predictable cycles of normal length but experience HMB as the development of follicles in the luteal phase contribute to high premenstrual estradiol levels, a process known as a LOOP cycle. 9 Individuals with primary amenorrhea deserve special attention, and details regarding their investigation are beyond the scope of the present paper. However, in general, primary amenorrhea is said to be present when menstruation has not yet occurred by the age of 14 years in the absence of secondary sexual characteristics (when it is called delayed puberty) or 16 years in the presence of secondary sexual characteristics. Associated symptoms such as cyclical pelvic pain may suggest the presence of ovulation in association with a Müllerian anomaly or other obstruction that should be appropriately investigated without delay. 
Most, but certainly not all, ovulatory disorders are suggested by the presence of symptoms of AUB, ranging from complete absence (amenorrhea) to infrequent or irregular onset of menstrual blood flow. Secondary amenorrhea is generally defined as the cessation of menstruation for 6 months consecutively after at least one previous spontaneous menstrual bleed. Using data from extensive epidemiological studies, FIGO has previously determined that for those aged 18–45 years, and using the 5%–95% percentiles from large-scale population studies, the normal frequency of menses is 24–38 days. Those with a cycle length of fewer than 24 days are deemed “frequent” while those whose cycle length is more than 38 days “infrequent,” a term designed to replace oligomenorrhea. Even in this category, regularity varies by age; for those aged either 18–25 or 42–45 years, the difference between the shortest and longest cycle should be 9 days or less, while for those aged 26–41 years, it is 7 days or less. Regardless, those with infrequent or irregular menstrual bleeding should be considered to have an ovulatory disorder. Diagnosing the presence of an ovulatory disorder at the extremes of reproductive age can be challenging, depending on the perception of what is normal. For postmenarcheal girls aged under 18 years, infrequent menstrual bleeding or irregular menstrual cycles suggesting ovulatory dysfunction are common, with available evidence suggesting that the individual's “normal” cycle length may not be established until the sixth year after menarche. During this pubertal transition, ovulatory dysfunction impacts about 50% of adolescent girls in the first year after menarche with a cycle length that is typically in the range of 21–45 days but sometimes is as short as 20 days or may even exceed 60 days. In the years after menarche, these variations change such that 6 years later, the range is similar to those of adults. These issues can be explored in detail elsewhere.
DISCUSSION AND CONCLUSION The FIGO HyPO‐P system for the classification of ovulatory disorders is submitted for consideration as a worldwide standard designed to harmonize definitions and categories in a fashion that should inform clinical care, facilitate the education of patients and trainees, and improve the ability of basic, translational, clinical, and epidemiologic research to advance our knowledge of ovulatory disorders, their diagnosis, and their management. The development has the general support of a broad spectrum of national and subspecialty societies, relevant journals, and recognized experts in the realm of ovulatory dysfunction. The lay participants agreed with the need for classification. Their comments helped refine the graphical representation and supported the rationale for a lay‐oriented explanation of ovulatory disorders presented in the context of the new system. Finally, no system should be considered permanent, so review and careful modification and revision should be carried out regularly. MGM: Chair of the Ovulatory Disorders Steering Committee (ODSC); responsible for the concept, design and management of the Delphi system; management of ODSC and stakeholder meetings, compiling and analysis of data, manuscript preparation. AHB: At large member of the ODSC; helped lead design and management of the Delphi process; analysis of data; responsible for converting results into the design of the system; manuscript preparation. SHC: Member of the ODSC; participated in the Delphi design and identification of stakeholders, and manuscript preparation. HODC: Member of the ODSC; participated in the Delphi design and identification of stakeholders, analysis of data, and manuscript preparation. ID: Co‐chair of the ODSC; participated in the Delphi design and identification of stakeholders, assisted with manuscript preparation. RF: Member of the ODSC; participated in the Delphi design and identification of stakeholders and assisted with manuscript preparation. LH: Member of the ODSC; participated in the Delphi design and identification of stakeholders, analysis of data, and manuscript preparation. EM: Member of the ODSC; participated in the Delphi design and identification of stakeholders, and manuscript preparation. ZVDS: Member of the ODSC; participated in the Delphi design and identification of stakeholders, analysis of data, and manuscript preparation. MGM reports grant funding from AbbVie and Pharmacosmos; consulting fees from Abbvie, Myovant, American Regent, Daiichi Sankyo, Hologic Inc and Pharmacosmos as well as royalty payments from UpToDate. He serves a voluntary role as Chair of the SEUD AUB Task Force, the Past Chair of FIGO's committee on Menstrual Disorders and Related Health Impacts, and Founding and Current Chair of the Women's Health Research Collaborative. AHB reports consulting fees from NovoNordisk and is a member of the WHO's Guideline Development on Infertility and a member of the International PCOS Guideline Group. He is a Trustee of the British Fertility Society and is a Director of Balance Reproductive Health Ltd and Balance Health Ltd. HODC is current Chair, FIGO Committee on Menstrual Disorders and Related Health Impacts. She has received clinical research support for laboratory consumables and staff from Bayer AG (paid to institution) and provides consultancy advice (all paid to institution) for Bayer AG, PregLem SA, Gedeon Richter, Vifor Pharma UK Ltd, AbbVie Inc; Myovant Sciences GmbH. 
HC has received royalties from UpToDate for articles on abnormal uterine bleeding. The rest of the authors have no conflicts of interest. Supplementary Table 1. Linking Delphi rounds to HyPO-P components. Appendix S1: Supporting information
Arbuscular mycorrhizal interactions and nutrient supply mediate floral trait variation and pollinator visitation | 2d11943d-595f-422e-b34e-ee0bd7b84fb8 | 11617648 | Microbiology[mh] | Floral traits, including floral display and nutritional rewards from pollen and nectar, drive bee visitation (Willmer, ; Bauer et al ., ; Roy et al ., ; Parachnowitsch et al ., ) and, in turn, greater bee visitation ensures successful pollination for plants (Willmer, ; Bauer et al ., ; Roy et al ., ; Parachnowitsch et al ., ). However, floral resources can vary widely in quality and quantity across environmental contexts (Brunet et al ., ; Goulnik et al ., ; Kuppler et al ., ). Therefore, it is imperative to characterize the ecological mechanisms that can enhance floral resources to increase bee visitation. Recent focus has shifted belowground to examine how microorganisms in the soil can improve plant performance, including floral resource production (Barber & Soper Gorden, ; Hyjazie & Sargent, ). However, there has been less attention to how functionally distinct microbial communities (and the associated trait variation) can directly or indirectly influence the relationship between floral resources and bee visitation. Here, using an experimental approach, we investigate how trait variation in microorganisms, specifically arbuscular mycorrhizal fungi (AMF) and their traits, affects floral resource production and how that, in turn, affects bee visitation. A secondary goal is to determine how differences in AMF ecological strategies and interactions with phosphorus (P) supply affect these characteristics. AMF, which grow symbiotically in the roots of most vascular plants (Smith & Read, ), are known for often improving plant growth and fitness. In this symbiosis, plants provide carbon to AMF, and in exchange, AMF improve access to nutrients such as P and nitrogen (Smith & Read, ). Specifically, AMF acquire and transport nutrients via hyphal networks that extend from outside the root (extraradical hyphae) to inside the root (intraradical hyphae). Nutrients are ultimately transferred to the plant host through structures called arbuscules, attached to intraradical hyphae, that together colonize root cortical cells. In this way, AMF could ultimately influence floral resources by promoting greater uptake of nutrients critical for flower production, including for flower production (size and quantity) and nectar and pollen production (Barber & Soper Gorden, ; Hyjazie & Sargent, ). In fact, some evidence suggests that AMF can influence flower size and number (Gange & Smith, ; Wolfe et al ., ), flowering duration (Sun et al ., ), floral volatiles (Barber et al ., ), nectar quality and quantity (Kaya et al ., ), pollen quality and quantity (Poulton et al ., ; Varga & Kytöviita, ; Pereyra et al ., ), and pollinator behavior (Barber et al ., ), including the composition of the pollinator visitors (Cahill et al ., ; Bennett & Cahill Jr., ). Improvements to floral resources via AMF (Bennett & Meek, ) could increase visitation and pollination services because bees tend to prefer plants with larger and more abundant flowers and flowers with higher nutritional rewards (i.e. greater pollen and nectar quality and quantity; Bauer et al ., ; Willmer, ; Roy et al ., ; Parachnowitsch et al ., ). 
However, previous studies that examined the connection between AMF and floral traits and/or bee visitation have been restricted to experimental systems with only a single AMF taxon or an uncharacterized AMF community (Kaya et al ., ; Gange & Smith, ; Sun et al ., ; Varga & Kytöviita, ; Pereyra et al ., ); furthermore, few have examined the direct or indirect pathways that can exist between distinct AMF communities, floral traits, and bee visitation (but see Barber et al ., ). Because AMF are not functionally homogeneous (Verbruggen & Kiers, ; Chagnon et al ., ; van der Heijden et al ., ), it is important to assess whether and how compositionally and functionally distinct AMF communities, such as differences in life-history strategies, alter floral resources and affect bee visitation. In particular, morphological, physiological, and phenological traits can differ among and within AMF species (Kokkoris & Hart, ; Chaudhary et al ., ), which may indicate life-history strategies for AMF (e.g. trade-offs between the extent of root colonization and hyphal biomass production; Hart & Reader, ). In different environments, such as nutrient-rich or poor soils, variations in these AMF traits are thought to result in either a net relative cost or benefit to plants (Johnson, ; Johnson, ). For example, the hypothesized Grime's C-S-R framework for AMF communities (Chagnon et al ., ) aims to categorize AMF into three life-history strategies: competitor, stress-tolerator, and ruderal. In this framework, competitor AMF supersede other AMF at obtaining carbon from plant hosts by optimizing uptake and transfer of nutrients like P to their plant hosts, which requires greater investment in extraradical hyphal production vs root colonization. Stress-tolerant AMF prevail in low-resource and stressful conditions (e.g. low carbon supply from the host) by reducing hyphal biomass production, which in turn provides limited nutrient transfer to the plant host in the short term. Ruderal AMF occupy recently disturbed soils through rapid production of spores and reestablishment of hyphal networks and symbiotic interactions (i.e. root colonization), but this high biomass turnover rate may indicate low resource-use efficiency, ultimately resulting in a disadvantage to plants. Thus, variations in these AMF traits could ultimately impact interactions between plants and AMF and thus AMF function. In this study, we determined how the composition and trait variation of AMF communities affect the relative benefit plants derive from the mycorrhizal associations, including the pathway from plant growth to floral resources to bee visitation, in low vs high P environments. To do this, we conducted a glasshouse experiment comparing how four synthetic AMF communities affected squash ( Cucurbita pepo ) growth and floral resources under two levels of P, and then observed how these changes to experimental plants affected bee behavior in a field setting. The four synthetic communities, which included three pairs of AMF species and a mixture of all six species, were created following the hypothesized Grime's C-S-R framework for AMF communities (Chagnon et al ., ) to capture trait variation among AMF species. Specifically, we examined three AMF life-history strategies: competitor, stress-tolerator, and ruderal.
Importantly, although no conclusive C‐S‐R designation has been identified for individual AMF species, and it remains debated (Treseder, ), this framework offers a starting point to conduct experiments that interrogate how different AMF communities, including the AMF trait variation within these communities, affect floral traits and bee visitation. First, we conducted ‘treatment‐trait correlations’ to examine the effect of the treatment combinations (i.e. distinct AMF communities under different P environments) on the plant (i.e. shoot and root biomass), community‐level AMF (i.e. root colonization, hyphal biomass, and spore production), and floral traits (i.e. flower number, flower size, pollen density and protein, and nectar volume and sugar) in addition to bee visitation. We predicted that more resource‐competitive AMF would bolster P uptake for plants in both low and high P supply environments and, thus, plants that associate with competitor AMF would have improved plant growth, floral resource quantity, and quality, and ultimately greater bee visitation, compared with either stress‐tolerant or ruderal AMF. Additionally, we expected that the effect of competitor AMF species on plant growth, floral resources, and bee visitation would be bolstered when included in a more functionally diverse AMF community. Specifically, a mixture of AMF species with distinct life‐history strategies could result in synergistic interactions, positively affecting plant growth and floral resources. By contrast, we expected ruderal AMF would improve floral resources in low P but not in high P environments because high root colonization in high resource environments may result in a net negative effect on plants. When P is not limiting, investing in AMF may be a net carbon cost to plants (Johnson, ). Next, we conducted ‘trait–trait correlations’ using a path analysis to test the direct and indirect pathways between community‐level AMF traits and floral traits and their effect on bee visitation. We predicted that the variation in AMF traits would indirectly influence bee visitation via the effect of AMF on floral resource quantity and quality. Specifically, we expected that greater hyphal biomass relative to AMF root colonization would increase floral resource production. As a result, bee visitation (i.e. number of visits or duration) would respond positively to improvements in the quantity or quality of floral resources (e.g. increased flower size or pollen protein). Therefore, if AMF enhance plant nutrient acquisition and increase floral resource production, then the presence of AMF should ultimately support bee visitation. Overall, by determining how distinct AMF communities alter floral resources and ultimately drive bee visitation, we link belowground interactions to aboveground interactions while taking into account trait differentiation within AMF communities. Study system In this experiment Cucurbita pepo L. var. cylindrica (hereafter ‘squash’) was used to study the relationship between AMF functional groups (following Grime's C‐S‐R framework in Chagnon et al ., ) and low‐/high P additions on bee visitation and pollination in a glasshouse and experimental field setting at the University of California, Berkeley (Berkeley, CA, USA) between June 29, 2019 and August 31, 2019. 
We used two nutrient levels (low vs high P supply) and four different synthetic AMF mixtures (competitor, stress-tolerant, and ruderal species plus a mixture of all six species) and a control, in a factorial design for a total of 10 treatment combinations with five replicates each (Fig. ). Squash is a widely grown, monoecious annual plant, which produces flowers that are only viable for pollination 1 d from sunrise to midday. Squash forms associations with a diversity of AMF species (Smith & Read, ). Squash is pollinated by a wide range of bees, including generalist bees (e.g. honey bees, Apis mellifera L., bumble bees, Bombus spp., and solitary bees such as Halictidae) and specialist bees (e.g. Peponapis sp.). AMF inoculum We chose two different AMF species per C-S-R group to create four different AMF inoculation mixtures plus a control (Fig. ): (1) competitor species, Gigaspora rosea and G. albida ; (2) stress-tolerant species, Acaulospora morrowiae and A. spinosa ; (3) ruderal species, Rhizophagus intraradices and Funneliformis mosseae ; (4) all CSR species (competitor, stress-tolerant, and ruderal); and (5) a no AMF species control with an autoclaved (twice 48 h apart at 121°C for 45 min) mixture of all species. AMF richness thus varied across the mixtures: richness of 2 for AMF mixtures 1–3 (competitor, stress-tolerant, ruderal), 6 for mixture 4 (CSR), and effectively 0 for mixture 5 (control), in which inoculum was autoclaved. AMF inoculum was acquired from INVAM (West Virginia University, Morgantown, WV, USA), which prepares the inoculum from roots, spores, hyphae, and the original growth medium. We used 30 g of each of the two species in AMF inoculation mixtures 1–3 (competitor, stress-tolerant, ruderal) and 10 g of each of the six species in mixtures 4–5 (CSR and control) for a total of 60 g of inoculum in each mixture for each pot. At planting, half of the inoculum (30 g) was mixed into the sand–clay mix and the other half (30 g) was put directly into the planting hole, where the seeds were placed, for a total of 60 g of inoculum. Experimental conditions On 29 June 2019, we planted 3 squash seeds (variety ‘Black Beauty’ zucchini; Baker Creek Heirloom Seed Co., Mansfield, MO, USA) in 5.4-l nursery pots filled with 5 kg of 2 : 1 (v/v) growing medium mix of silica sand and a calcinated, attapulgite clay soil conditioner (Agsorb 5/20 LVM-G, Chicago, IL, USA) modified from (Hodge et al ., ; Thirkell et al ., ), hereafter, ‘sand–clay mix’, and 60 g of AMF inoculum to a final bulk density of 0.923 g cm −3 . The sand–clay mix was autoclaved twice 48 h apart at 121°C for 45 min to ensure a sterile growing medium. Drainage holes (nine 2 cm 2 circular holes) in pots were covered with 20 μm mesh to prevent roots from growing out while still allowing water to drain. Seeds were surface sterilized using a 10% bleach solution and then rinsed with deionized water. On 5 July 2019, seedlings were thinned to a single seedling per pot. Pots were routinely rearranged in a random order in rows that were 1 m apart in a glasshouse at c . 27°C with a 14 h photoperiod with supplemental lighting (Oxford Tract, UC Berkeley, Berkeley, CA, USA). On 9 August 2019, after at least one flower had emerged for each plant, all plants were transferred to a nearby field setting for bee observations and floral resource measurements (Oxford Tract; UC Berkeley). The field is adjacent to an urban garden which supplies diverse floral resources attracting a diverse group of bees (Wojcik et al ., ).
Pots were placed on the ground and were randomly arranged in rows that were 1 m apart. Water and nutrient supply To determine water holding capacity (WHC), a 5.4-l pot was filled with 5 kg of sand–clay mixture, the same amount at the same bulk density used in experimental pots, and then saturated with water and allowed to drain for 48 h; then, the gravimetric water content (GWC) was measured. The GWC of the sand–clay mix at WHC was 17%. Using this information, pots were weighed and watered every other day to maintain WHC with deionized water for the duration of the experiment. To supply nutrients, we used a modified Long Ashton solution, following Rouphael & Colla, consisting of N (16.0 mM), P (1.5 mM), K (5.5 mM), S (3.5 mM), and Ca (7.0 mM) for the ‘high’ P supply treatment. For the ‘low’ P supply treatment, we used one-tenth the concentration of P (0.15 mM) and the same concentrations of the other macronutrients and micronutrients in the ‘high’ P solution following Valentine et al . The nutrient solution (200 ml) was applied at planting and, thereafter, once every 4 d with watering events. Floral traits Between 9 and 19 August 2019, floral trait measurements were taken every day in the field setting. One day before sampling plants, flowers were covered using insect exclusion bags made from a woven polyester fabric to prevent insects from collecting nectar or pollen. Not all plants produced flowers each day. Floral traits per plant were measured as: (1) floral display (flower size and number); (2) nectar resources (volume and sugar concentration); and (3) pollen resources (density and protein concentration). Flower size refers to the average length of the petals to the base of the flower. Nectar volume was measured using calibrated microcapillary tubes, and sucrose concentration was measured using a refractometer (Eclipse Handheld Refractometer; Bellingham & Stanley Ltd, Tunbridge Wells, UK). For pollen measurements, anthers were collected and frozen at −20°C for later processing. A 1 mg subsample of pollen was used to determine pollen protein concentration using a Bradford Assay following (Vaudo et al ., ). The remaining sample was suspended in 1 ml 50–50 glycerol water, and a 10 μl aliquot was mounted on a slide to determine the relative density of pollen grains (pollen density) by counting the total number of pollen grains. Pollinator survey We surveyed bees for 7 d from August 23 to 30 for a total of 24.5 person-hours of observations. All surveys were performed from 8:30 h to 12:00 h when bees were most active at the site and before flowers closed. We followed individual bees within the experimental plot and used handheld digital voice recorders to record flower visitation, measured as the number of flowers visited and time spent per flower in seconds, only if bees probed the stamen, pistil, or nectary following Barber et al . Since our methods relied on following individual pollinators, our observations only consisted of bees, which were the most actively mobile pollinators at the experimental plot at the time of observations. Bees were identified as honey bees ( Apis mellifera ), squash bees ( Peponapis spp. and Xenoglossa spp.), or within six other flower visitor categories used in observational surveys of flower visitors in this region (Supporting Information Table ; Kremen et al ., ); all identified bees are known pollinators of squash. Individual bees were followed as long as possible or until they left the plot.
We calculated the number of bee visits as the number of flower visits per day on each plant and bee visitation time as the total time spent by bees per day on each plant. Plant growth traits At the end of the pollinator survey, plants were destructively harvested to determine shoot and root biomass. Shoots were cut at the surface of the sand–clay mixture. The root structure was carefully removed from the sand–clay mixture, and any adhered sand and clay particles were rinsed off the roots in dH 2 O. All plant material was dried at 60°C, and shoot dry weights and root dry weights were determined. The remaining sand–clay mixture was stored at 4°C for extraradical hyphal length measurements, and a subsample of the roots was taken before drying for root colonization measurements. AMF traits Root colonization We determined root colonization by counting AMF structures in stained roots. Roots were cleared in 10% KOH, acidified in 1% HCl, and stained with trypan blue (Koske & Gemma, ). Percent colonization by AMF was determined using the intersections method at 200× magnification (McGonigle et al ., ). AMF colonization in this study refers to percent root colonization by arbuscules, vesicles, or hyphae over the total intersections counted ( c . 100 intersections per sample). Hyphal length As a proxy for AMF hyphal biomass, the total length of extraradical hyphae was measured on extracted hyphae using the membrane filter technique modified after Hanssen et al . Briefly, two 5 g samples of sand–clay mixture from each pot were suspended in 15 ml of dH2O and 20 ml of sodium hexametaphosphate (35%) and stirred overnight. The soil suspension was then sieved through a 32 μm sieve and resuspended with 100 ml dH2O. Next, 10 ml of the suspension was filtered onto a 0.47 μm nitrocellulose filter paper (gridded, 25 mm diameter), which was then stained with trypan blue (Koske & Gemma, ). The filters were placed on slides with 50–50 glycerol water. Hyphal length ( H ) on the slide was calculated using the equation H = ( I π A )/(2 L ), where I is the average number of intersections per grid, A is the grid area, and L is the total length of the grid lines. Then, the total length of fungal hyphae ( F ) in each pot (m g −1 of sand–clay mixture) was estimated using the equation F = H × 10 −6 ( A / B ) (1/ S ), where A is the area of the filter, B is the grid area, and S is the amount of soil filtered (Bloem et al ., ). Ratio of root colonization to hyphal length To account for root colonization vs the production of hyphae, we calculated the ratio of percent root colonization to hyphal length (root colonization : hyphal length) for each pot. Spore count The number of spores was measured using the sucrose density gradient centrifugation method following Brundrett et al . First, we blended 100 g of the sand–clay mixture with 200 ml of deionized water for 30 s at high speed using a blender. The blended material was poured through a 32 μm and 500 μm sieve. The contents of the 500 μm sieve were transferred to a 50 ml centrifuge tube with a 20–60% sucrose gradient and centrifuged at 960× g for 3 min. The supernatant was decanted into a 32 μm sieve, and the contents were transferred to a gridded Petri dish with 20 ml deionized water. The total number of spores was then counted under the microscope. Statistical analyses We first examined the effect of the treatment combinations on the plant, AMF, and floral traits in addition to bee visitation (‘treatment-trait models’).
Then, using a path analysis, we tested the direct and indirect pathways between AMF traits and floral traits and their effect on bee visitation (‘trait–trait models’). Treatment-trait models We tested the effect of AMF inoculation, P addition (low and high P supply), and their interaction on the multiple plant, floral, and AMF traits measured, and on bee visitation. Bee visitation was modeled for all bee groups (e.g. honey bees and other wild bees) combined because there were insufficient data for each bee group to model them separately (Table ). All models had the same model structure: AMF inoculation treatment, P supply treatment, and their interaction as the fixed effects. We used generalized linear models (GLM) for all treatment-trait tests except for floral traits and bee visitation; these variables were measured on individual plants over multiple days and, thus, we used generalized linear mixed models (GLMM), with individual plant identity and date as random effects to account for the variation between sampling dates (Bates et al ., ; Kuznetsova et al ., ). Models were constructed using the lme4 and lmerTest packages in R. Root colonization models assumed a binomial error distribution, and models with count data (i.e. spore count, number of flowers, pollen density, and number of bee visits) assumed a Poisson error distribution. All other models assumed a Gaussian error distribution. To determine the significance of the fixed effects, we used an F -test for models with continuous variables and a likelihood ratio test for models with count data. Type II sums of squares were used for each test (Langsrud, ). Degrees of freedom were calculated using the Kenward & Roger method. While our experiment focuses on the ‘functional’ effect of the AMF inoculation (competitor, stress-tolerant, ruderal, and the combined CSR species, plus the control; model AMF CSR ), we also tested whether there was a ‘richness’ effect (model AMF richness ) or ‘presence–absence’ effect of AMF inoculation (model AMF pa ) on AMF traits, floral traits, and bee visitation. For these models, we ran the same GLM or GLMM (with the same fixed/random effects structure) for each variable with the levels of AMF inoculation treatment effect regrouped as follows (Table ): (a) AMF richness of 0 sp. (none) vs 2 sp. (competitor + stress-tolerant + ruderal) vs 6 sp. (CSR) for the AMF richness model; and (b) absence (none) vs presence (competitor + stress-tolerant + ruderal + CSR) of AMF inoculum for the AMF pa model. Trait–trait models Next, we determined the trait–trait relationship between AMF traits (hyphal length, root colonization, and root colonization : hyphal length) and floral traits (flower number, flower size, pollen density and protein, and nectar volume and sugar) on bee visitation (number of bee visits and bee visitation time) using a piecewise structural equation model (PSEM, or path analysis). In contrast to the traditional structural equation modeling (SEM) method, piecewise SEM provides an important advantage as it permits the analysis of data with non-normal error distributions, such as bee visitation count data (Lefcheck, ). For both bee visitation response variables, we constructed the same a priori model, considering all possible mechanisms whereby AMF traits and floral traits influence bee visitation. We simplified the initial models by eliminating nonsignificant pathways before developing the final models. Model adequacy was determined using the chi-squared test and AIC.
Because the AMF traits were measured once per individual plant, but floral traits and bee visitation were measured across multiple days (but not overlapping days), we averaged all floral traits for each individual plant and summed bee visits across days. We accounted for the number of observation days (log-transformed) for bee visitation in the model using the offset function. Structural equation modeling was conducted with the R package psem (Lefcheck, ). In all GLM, GLMM, and PSEM models, we used Gaussian and Poisson error distributions, respectively, for continuous and count variables. We performed all statistical analyses in R v.4.4.1 (R Core Team, ).
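As a minimal sketch of the factorial layout described in the Study system section (five inoculation treatments × two P levels × five replicates = 50 pots), the following base-R snippet builds the treatment table and a randomized pot order; it is illustrative only, not the authors' code, and the treatment labels are shorthand introduced here.

# Build the 5 x 2 factorial design with five replicates (50 pots) and
# randomize pot positions, mirroring the routine rearrangement described above.
set.seed(42)  # arbitrary seed, for a reproducible example
design <- expand.grid(
  amf       = c("control", "competitor", "stress_tolerant", "ruderal", "CSR"),
  p_supply  = c("low", "high"),
  replicate = 1:5
)
design$position <- sample(nrow(design))        # random glasshouse position
design <- design[order(design$position), ]
table(design$amf, design$p_supply)             # five pots per treatment combination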
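As a worked illustration of the hyphal length calculation described in the Methods, H = (IπA)/(2L) on the slide and F = H × 10⁻⁶ (A_filter/B)(1/S) per amount of growing medium, the sketch below plugs hypothetical numbers into the two equations; all values are placeholders, not data from the study.

# Worked example of the hyphal length equations (placeholder numbers only).
n_int     <- 40      # mean number of hyphal intersections per grid (I)
grid_area <- 1       # grid area A (mm^2)
grid_len  <- 4       # total length of the grid lines L (mm)
H <- (n_int * pi * grid_area) / (2 * grid_len)   # hyphal length on the slide

filter_area <- 490   # area of the filter (mm^2, ~25 mm diameter filter)
soil_g      <- 0.5   # amount of soil represented by the filtered aliquot (g)
F_soil <- H * 1e-6 * (filter_area / grid_area) * (1 / soil_g)
F_soil               # hyphal length per g of sand-clay mixture, following the Methods' scaling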
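The snippet below sketches the type of treatment-trait model described in the Statistical analyses, using the lme4/lmerTest syntax the authors cite; the simulated data frame and its column names are hypothetical stand-ins, and this is not the authors' code (the Kenward-Roger option additionally requires the pbkrtest package).

library(lme4)      # GLMMs (glmer)
library(lmerTest)  # Kenward-Roger F-tests for Gaussian mixed models

# Simulated stand-in data (purely illustrative): one row per plant per sampling date.
set.seed(1)
floral <- expand.grid(plant_id = factor(1:50), date = factor(1:5))
floral$amf      <- factor(rep(rep(c("control", "competitor", "stress_tolerant",
                                    "ruderal", "CSR"), each = 10), times = 5))
floral$p_supply <- factor(rep(rep(c("low", "high"), times = 25), times = 5))
floral$flower_size <- rnorm(nrow(floral), mean = 10, sd = 2)
floral$n_flowers   <- rpois(nrow(floral), lambda = 2)

# Gaussian trait (e.g. flower size): AMF x P supply fixed effects, plant identity
# and date as random intercepts, Type II F-test with Kenward-Roger df.
m_size <- lmer(flower_size ~ amf * p_supply + (1 | plant_id) + (1 | date),
               data = floral)
anova(m_size, type = "II", ddf = "Kenward-Roger")

# Count trait (e.g. flower number): Poisson GLMM with the same structure,
# fixed effects assessed by likelihood-ratio tests.
m_count <- glmer(n_flowers ~ amf * p_supply + (1 | plant_id) + (1 | date),
                 family = poisson, data = floral)
drop1(m_count, test = "Chisq")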
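To illustrate the regrouping of the inoculation factor for the AMF richness and AMF pa models described above, here is a minimal base-R sketch, assuming the hypothetical data frame and packages from the previous example:

# Recode the five-level inoculation factor into (a) species richness and
# (b) presence/absence groupings; the same model formulas are then refit
# with these factors in place of the full inoculation treatment.
floral$amf_richness <- factor(ifelse(floral$amf == "control", "0 spp.",
                              ifelse(floral$amf == "CSR", "6 spp.", "2 spp.")),
                              levels = c("0 spp.", "2 spp.", "6 spp."))
floral$amf_pa <- factor(ifelse(floral$amf == "control", "absent", "present"),
                        levels = c("absent", "present"))

m_size_rich <- lmer(flower_size ~ amf_richness * p_supply +
                      (1 | plant_id) + (1 | date), data = floral)
anova(m_size_rich, type = "II", ddf = "Kenward-Roger")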
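Finally, a hedged sketch of a piecewise SEM of the kind described above, written with the piecewiseSEM package (which provides the psem() model-building function referenced in the Methods). The plant-level data frame and its columns are hypothetical, and the two component models shown correspond to the simplified pathway structure reported in the Results rather than to the authors' full a priori model.

library(piecewiseSEM)  # psem() for piecewise structural equation models

# Hypothetical plant-level data (one row per plant): floral traits averaged
# across days, bee visits summed, number of observation days recorded.
set.seed(2)
plants <- data.frame(
  root_col    = runif(50, 0, 1),                 # proportion of root length colonized
  hyphal_len  = rlnorm(50, meanlog = 1, sdlog = 0.4),  # extraradical hyphal length
  flower_size = rnorm(50, 10, 2),
  bee_visits  = rpois(50, 3),
  obs_days    = sample(3:7, 50, replace = TRUE)
)

# Component models: flower size as a function of the AMF traits (Gaussian), and
# bee visits as a function of flower size (Poisson, offset for survey effort).
sem_fit <- psem(
  lm(flower_size ~ root_col + hyphal_len, data = plants),
  glm(bee_visits ~ flower_size + offset(log(obs_days)),
      family = poisson, data = plants)
)
summary(sem_fit)  # path coefficients, tests of directed separation, AIC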
Plant growth traits Shoot biomass varied significantly between AMF CSR functional groups ( F = 3.03, P = 0.03; Fig. ; Table ). Plants inoculated with stress-tolerant AMF had 11% greater shoot biomass on average than the control (Table ). The AMF CSR inoculation treatment had no effect on root biomass and root-to-shoot biomass (Fig. ; Table ). The richness and presence–absence of AMF inoculation also had a significant effect on shoot biomass (Table ), with the largest shoot biomass when inoculated with the richest AMF inoculum (Table ). Root biomass and root-to-shoot biomass did not significantly vary between the richness and presence–absence levels (Table ). P supply had a strong effect on shoot ( F = 61.32, P < 0.001) and root biomass ( F = 11.34, P < 0.05) but not root-to-shoot biomass (Fig. ; Table ). Specifically, in the high P supply treatment, shoot biomass was 18% greater on average, and root biomass was 15% greater on average (Table ). Across all AMF CSR , AMF richness , and AMF p-a models, there was no interactive effect of P supply and AMF treatments on the plant traits measured. AMF traits There was a strong effect of the AMF CSR inoculation treatment on all AMF traits (hyphal length (m g −1 ): F = 28.22, P < 0.001; root colonization: F = 256.07, P < 0.001; root colonization : hyphae: F = 25.48, P < 0.001; spore count (grains ml −1 ): F = 2.95, P = 0.03; Table ; Fig. ). For example, hyphal length was 300% higher than the control in pots with ruderal type AMF inoculum, followed by CSR, stress-tolerant, and competitor (Fig. ; Table ). We observed a similar trend for root colonization and the ratio of root colonization to hyphae, where plants/pots inoculated with CSR and ruderal type AMF inoculum had the highest values, followed by stress-tolerant and competitor type AMF inoculum. By contrast, spore production was highest for plants inoculated with CSR-type AMF inoculum, followed by ruderal, stress-tolerant, and competitor type AMF inoculum, whereas root colonization was highest for plants inoculated with ruderal type inoculum. There was also a richness and presence–absence effect of AMF inoculation on all AMF traits.
AMF CSR functional groups and P supply also had an interactive effect on all AMF traits (hyphal length (m g−1): F = 14.52, P < 0.001; root colonization: F = 13.18, P = 0.01; root colonization : hyphae: F = 11.10, P < 0.001; spore count (grains ml−1): F = 4.48, P < 0.01; Table ; Fig. ). While hyphal length was greater under low P supply for the stress-tolerant, ruderal, and CSR groups, in the competitor type AMF group hyphal length was greater under high P supply. Root colonization was substantially greater on average (c. 99% more; Table ) in pots with low P supply, with the highest levels observed in ruderal and CSR-type AMF mixtures, but root colonization was generally low for plants that received high P supply regardless of AMF CSR inoculation treatment (Fig. ). Similarly, the ratio of root colonization to hyphal production was greatest in pots with low P supply and was the highest in pots inoculated with the CSR-type AMF (Fig. ). Spore production was higher in AMF-inoculated pots, with stress-tolerant and ruderal type AMF producing the most spores (Fig. ). We observed low background levels of spores and hyphae in control pots, likely due to the autoclaved AMF inoculum containing residual spores and hyphae, yet there was virtually no AMF colonization in control pots (i.e. only one plant with 1% root colonization; Fig. ). In the AMF richness models, a significant interactive effect of AMF inoculation and P supply was present for all AMF traits (root colonization: F = 3.60, P = 0.04; root colonization : hyphae: F = 201.55, P < 0.001; spore count (grains ml−1): F = 3.41, P = 0.04) except hyphal length production. By contrast, for AMF p-a models, a significant interactive effect of AMF inoculation and P supply was only present for spore production ( F = 6.34, P = 0.02). P supply alone had a significant effect on hyphal production ( F = 39.38, P < 0.001), root colonization ( F = 349.26, P < 0.001), and the ratio of root colonization to hyphal production ( F = 34.67, P < 0.001). Plants grown with a low P supply produced 145% more hyphae on average and had a higher ratio of root colonization to hyphal production than those that received a high P supply.

Floral traits

AMF CSR inoculation treatments significantly affected the flower size ( F = 2.64, P = 0.05), total number of flowers ( F = 16.97, P < 0.01), nectar sugar, nectar volume ( F = 3.64, P = 0.01), and pollen protein ( F = 4.85, P < 0.01). Plants inoculated with stress-tolerant AMF had 13% higher nectar sugar concentration than the control (Fig. ; Table ). By contrast, plants inoculated with ruderal and CSR types had greater nectar volume (up to 318% more nectar than the control on average; Table ). For pollen protein, plants inoculated with competitor and CSR types had up to 21% greater pollen protein than the control on average (Table ). The interaction between AMF CSR and P supply – not AMF CSR alone – significantly affected pollen density ( F = 11.10, P = 0.03; Table ). Among the traits that significantly varied among AMF CSR functional groups, only variation in flower size (AMF richness: F = 4.40, P = 0.02; AMF pa: F = 7.65, P < 0.01) and the number of total flowers (AMF richness: F = 8.09, P = 0.02; AMF pa: F = 5.20, P = 0.02) could also be explained by the richness and presence–absence of AMF inoculation (Table ). In general, plants grown with AMF (Fig. ; Table ) had c. 29% more flowers on average (Table ), while plants inoculated with the richest assemblage of AMF (i.e. CSR type) had the largest number of flowers (Fig. ).
The P supply treatment also had a strong effect on nectar volume ( F = 9.22, P < 0.01), with plants that received a higher supply of P producing a greater amount of nectar (Table ; Fig. ). We observed a similar effect of P supply on the number of flowers ( F = 9.55, P < 0.01), with an average of c. 27% more flowers on plants grown with high P supply. Across all AMF CSR, AMF richness, and AMF p-a models, there was no interactive effect of AMF inoculation and P supply on floral traits.

Bee visitation

AMF functional types (i.e. AMF CSR) did not affect bee visitation time or the number of bee visits. However, AMF richness had an effect on both the number of bee visits and bee visitation time (number of bee visits: F = 7.29, P = 0.03; bee visitation time: F = 3.70, P = 0.03), whereas AMF p-a had an effect only on the number of bee visits ( F = 4.16, P = 0.04). Plants grown with AMF inoculum received 28% more bee visits and 47% more bee visitation time (Table ). Plants inoculated with six AMF species (i.e. representing all functional groups) received the highest bee visitation time (Fig. ). P supply did not affect bee visitation time or the number of bee visits.

Effect of belowground and aboveground traits on bee visitation

PSEM revealed a direct link between the number of bee visits and flower size and further revealed that flower size was associated negatively with AMF root colonization and positively with hyphal length (Fig. ). No significant pathways to bee visitation time emerged in the PSEM.
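For readers who want to see how such a path model is specified, the sketch below reproduces the general structure of the piecewise SEM described in the statistical methods, using the psem() function from the piecewiseSEM package. It is a minimal sketch under stated assumptions: the data frame plant_means and its column names are hypothetical, and only a subset of the measured floral traits is shown.

library(piecewiseSEM)

# One row per plant: AMF traits, floral traits averaged across days,
# bee visits summed across days, and the number of observation days.
sem_visits <- psem(
  lm(flower_size ~ root_colonization + hyphal_length, data = plant_means),
  glm(bee_visits ~ flower_size + offset(log(obs_days)),
      family = poisson, data = plant_means)
)

summary(sem_visits)  # path coefficients, tests of directed separation, goodness of fit, AIC

Nonsignificant pathways would then be dropped and the reduced model refitted, as described in the Methods.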
In this study, we demonstrate that belowground interactions between a plant and AMF impact floral traits, which in turn affect bee foraging dynamics on that plant. In general, we observed positive effects of AMF not only on plant growth but also on floral traits, such as display size and floral resource quantity and quality, and in turn, on bee visitation. Importantly, however, the effect of AMF on some floral traits varied between compositionally distinct AMF inoculation mixtures. Bee visitation was also highest for plants inoculated with the richest assemblage of AMF species, which included AMF representing different life-history strategies. Because our experimental design included a range of distinct AMF communities, we were able to examine the wide expression of AMF traits. Yet, our design did not distinguish the effects of AMF functional diversity from species richness (as the CSR mixture had 6 AMF species whereas the competitor, stress-tolerant, or ruderal mixtures each had only 2 AMF species), and in some cases, the effects of AMF inoculation varied across both richness and functional diversity of AMF. Nevertheless, our results demonstrated that AMF traits varied among the different AMF communities and showed how this variation was ultimately linked to floral traits and bee visitation. AMF traits (i.e. spore production, hyphal length, and root colonization) strongly varied between the AMF inoculation mixtures and interactively with the P supply treatment, suggesting different ecological strategies among the AMF mixtures. For example, the ruderal AMF group had the highest root colonization and production of spores and hyphae, especially in low P supply conditions. This follows the CSR framework, which indicates ruderal species will flourish in high-disturbance environments by growing quickly (i.e. production of spores and hyphae) and establishing symbiotic associations via root colonization.
Surprisingly, the competitor AMF species group had the lowest values across all the AMF traits measured – even lower than the stress-tolerant AMF species group, which is expected to grow slower than the competitor or ruderal species. While individual AMF species do not have conclusive CSR designations, the strong differences in AMF traits between the AMF inoculation mixtures and P supply treatment signal that the AMF inoculation mixtures in our study represent functionally distinct ecological strategies with important implications for the plant host. While plant growth responded positively to AMF inoculation, plant growth varied minimally between the AMF functional groups – regardless of the differences in AMF traits (i.e. root colonization, hyphal biomass, and spore production) between the AMF functional groups. Instead, P supply had a stronger impact on plant growth. Plant growth was greater (i.e. greater shoot biomass) when P supply was high (Fig. ). In this case, our application of the CSR framework for AMF was minimally predictive of the variations among plant traits, contrasting previous studies examining the effect of AMF functional differences on plant growth (Smith et al., ); instead, we found that the CSR framework was more predictive for floral traits (Table ). Specifically, our study shows that the effect of AMF on the quantity and quality of individual floral resources depends on the identity or composition of AMF. A key pattern observed was that no singular AMF inoculum mixture held the highest value for all floral traits (Fig. ; Table ), which conflicted with our expectations that all floral resources would be most enhanced by the CSR mixture. Instead, our prediction that floral resources could be bolstered by more functionally diverse AMF communities (i.e. an additive effect of the richer CSR mixture) was only observed for some floral traits in our study (e.g. plants inoculated with the CSR mixture had the largest flower size on average; Fig. ). On one occasion, we observed that antagonistic effects may result from a functionally diverse AMF community (e.g. plants inoculated with the CSR mixture had the lowest nectar sugar). Overall, these results indicate that the effect of AMF inoculation on individual floral resources is not equal across distinct AMF communities and, thus, emphasize the important role of AMF identity in mediating aboveground processes. Importantly, these results show that at the whole plant level, responses to AMF functional differences may be obscured, whereas, upon closer inspection of plant structures, such as floral traits, they may come to light. In our study, we suspect that the overall benefit of AMF to floral traits is also due to the increased transfer efficiency of P by competitor AMF. P is a necessary nutrient for plant growth and an important building block for pollen (Lau & Stephenson, ). For example, pollen protein concentration was highest on average for plants inoculated with competitor AMF whereas plants inoculated with stress-tolerant AMF had the lowest pollen protein concentration on average, even compared with the control (Fig. ). This result follows our expectation that competitor AMF would be most beneficial to plants. Surprisingly, however, we measured the lowest hyphal production for competitor AMF (Fig. ). This may suggest that hyphal production does not necessarily track the rate of P transfer, as previously suggested (Jansa et al., ; Avio et al., ).
One possibility is that competitor AMF may be more efficient in P translocation and transfer to plant roots despite low hyphal production. Previous studies have also shown that P availability is a determining factor for mycorrhizal responses (Smith & Read, ). In some cases, P supply did influence the role of AMF on floral traits in this study (Fig. ). For example, while plants inoculated with the CSR AMF mixture had the largest flowers on average, these plants had smaller flowers in low vs high P supply conditions (Fig. ). Despite these differences, we found that P supply alone had a minimal impact on floral resources (Table ), suggesting that plants that form associations with AMF were able to counteract the potentially detrimental impact of low P supply on the production of floral resources (e.g. low P supply in control plants resulted in lower nectar volume and fewer flowers). We speculate that AMF-mediated variations in floral traits may also influence pollinator health. Because bees depend on pollen and nectar to meet critical nutritional requirements (Willmer, ; Bauer et al., ; Roy et al., ; Dolezal & Toth, ; Parachnowitsch et al., ), our results suggest that plants that form associations with AMF may be more nutritionally beneficial to foraging bees via improvements to pollen and nectar (Fig. ) and, thus, could potentially improve bee health. If nectar volume is relatively higher for plants forming AMF associations, then bees may be able to meet their caloric needs in fewer flower visits (i.e. with less energetic expenditure and risk of predation during foraging) by visiting those plants (Jha & Kremen, ). Pollen protein, in particular, is necessary for brood rearing and reproduction (Roulston et al., ; Human et al., ; Brodschneider & Crailsheim, ; Li et al., , ) and thus connects directly to bee fitness. Importantly, few studies have addressed how pollen quality, much less pollen protein, is impacted by AMF (Bennett & Meek, ), and to the best of our knowledge, our study provides the first evidence that AMF can improve pollen protein concentration in flowers. Therefore, the 9–21% increase in pollen protein by AMF associations (Table ) provides an opportunity to support bee health by focusing on beneficial belowground interactions. Beyond floral traits, our results provide evidence that AMF inoculation, in general, could have some positive effects on bee foraging dynamics. Plants inoculated with AMF received the highest number of bee visits and bee visitation time (Fig. ). Even though our CSR framework was not predictive for bee visitation (Table ), the number of bee visits and bee visitation time responded positively to the richest assemblage of AMF species (i.e. the CSR mixture). Since our experimental design did not differentiate the effects of AMF functional diversity from species richness, it is possible that the effect of AMF on bee visitation varied across both the richness and functional diversity of AMF. The study also suggests that one of the principal ways AMF could influence bee foraging dynamics is via floral display size. In our pathway analysis (structural equation model), flower size increased with AMF hyphal biomass, and, in turn, plants received a greater number of bee visits when flowers were larger (Fig. ). Floral display size is well-known to influence bee foraging dynamics (Herrera, ) and is considered an important visual cue for the quality and quantity of floral resources (Ortiz et al., ).
However, trade-offs did emerge for plant host and AMF associations because plants with greater root colonization had reduced flower size (Fig. ). These opposite trends signal potential carbon expenditure trade-offs for an individual plant: between producing flowers vs forming an association with AMF. Producing floral resources carries a substantial carbon cost for plants; for example, in some cases, plants allocate up to 30% of net primary productivity to floral nectar (Obeso, ). Similarly, plants can transfer up to 30% of net primary productivity to AMF (Frey, ). Our pathway analysis suggests that in more highly colonized roots, the relative carbon cost per unit of nutrients delivered by AMF may be higher. This leads to smaller flowers as plants shuttle more carbon belowground to obtain needed nutrients. Conversely, more extraradical hyphae may indicate relatively more nutrient transport via AMF to plants (Smith & Read, ) and possibly a lower marginal carbon cost. Extraradical hyphal production may be a better predictor of nutrient acquisition and uptake benefits to the host plant (Jakobsen et al., ; Sawers et al., ; Charters et al., ) and, in this case, floral resources. These results suggest that AMF traits (i.e. root colonization vs hyphal length; Kiers et al., ; Hart et al., ; Treseder, ; Treseder et al., ) affect floral traits and, in turn, bee foraging dynamics. Overall, our study suggests that functional diversity underscores below- to aboveground interactions. The different AMF inoculation treatments did not have an equal effect on floral traits and bee foraging dynamics. Applying trait-based frameworks may reveal ecological patterns that could otherwise be obscured, especially when multiple mutualistic interactions are involved (Afkhami et al., ). Furthermore, variations in the plant–mycorrhizal and plant–pollinator relationships that we observed can have important implications for conservation management of natural and managed systems. Consideration of below- to aboveground linkages could inform and guide restoration efforts of natural habitats aiming to improve plant growth and bee visitation. In agricultural systems, targeting practices to enhance plant–mycorrhizal relationships, such as cover crops (Higo et al., ) and crop diversification (Guzman et al., ), may lead to several beneficial impacts on plant growth and floral traits, influencing the frequency and duration of bee visitations important for plant reproduction. In general, incorporating belowground interactions into predictive models of floral trait variations may assist in predicting changes in plant–pollinator interactions.

None declared.

AG, MF, CK and TB designed the study. AG collected the data with substantial assistance from MM, NL, MB, GD and IS-G. AG conducted the analyses and wrote the first manuscript draft. All co-authors provided feedback and approved the final manuscript.

Table S1 Total number of occurrences of different bee groups per AMF functional group (none, competitor, stress-tolerant, ruderal, and CSR mixtures) and P supply (low vs high) combination.
Table S2 Mean ± SE of all plant traits. Same letters indicate nonsignificant difference between means based on post hoc Tukey HSD tests.
Table S3 Mean ± SE of AMF traits. Same letters indicate nonsignificant difference between means based on post hoc Tukey HSD tests.
Table S4 Mean ± SE of floral traits. Same letters indicate nonsignificant difference between means based on post hoc Tukey HSD tests.
Table S5 Mean ± SE of bee visitation measurements.
Same letters indicate nonsignificant difference between means based on post hoc Tukey HSD tests.
Progress in the development of stabilization strategies for nanocrystal preparations
Causes of instability of nanocrystals

Small particles have higher surface energy, so the particle size will increase to reduce the surface energy during storage. This section discusses the representative phenomena that affect the particle size of nanocrystals, including aggregation, Ostwald ripening, and sedimentation.
Aggregation

A nanosuspension is a thermodynamically unstable heterogeneous water dispersion, and aggregation between crystals is one of the main reasons for its low stability. Particles in suspension exhibit Brownian motion, and they can collide, stick together, and coalesce due to the attraction between the particles and van der Waals forces (Berre et al., ). This phenomenon can be observed in the preparation and storage of nanocrystal suspensions. The aggregation of nanoparticles increases the particle size, broadens the particle size distribution, and, thus, reduces the solubility and dissolution rate of drugs.
Ostwald ripening

Ostwald ripening (crystal growth) is a phenomenon in which crystals of various particle sizes grow due to differences in solubility. According to the Ostwald-Freundlich equation, the preparation of an insoluble drug in a nanocrystal suspension could significantly improve the drug solubility. When the particle size is less than 1 μm, the drug solubility increases with the decrease of the particle size:

(1) $\log\left(\dfrac{S_2}{S_1}\right) = \dfrac{2\sigma M}{\rho R T}\left(\dfrac{1}{r_2} - \dfrac{1}{r_1}\right)$

where $S_1$ and $S_2$ are the drug solubilities for particles of radii $r_1$ and $r_2$, respectively; $\sigma$ is the surface tension between the solid drug and the liquid solvent; $M$ is the relative molecular mass; $\rho$ is the density of the solid drug; $R$ is the molar gas constant; and $T$ is the thermodynamic temperature. Since small crystals have higher surface free energy, they have higher saturation solubility than large crystals, which leads to a drug concentration gradient between crystals. A smaller crystal interacts with a larger crystal, and the resulting diffused mass exchange causes the larger crystal to grow further and the smaller crystal to shrink and disappear (Singh et al., ).
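A quick numerical illustration of Eqn (1), treating the logarithm as the natural logarithm (the usual thermodynamic form), is sketched below in R. All parameter values are illustrative assumptions for a generic hydrophobic drug, not values reported in this review.

# Ostwald-Freundlich estimate of the solubility gain from size reduction
sigma <- 0.030    # interfacial tension, N/m (assumed)
M     <- 0.350    # molar mass, kg/mol (assumed)
rho   <- 1300     # density of the solid drug, kg/m^3 (assumed)
Rgas  <- 8.314    # molar gas constant, J/(mol K)
T     <- 298      # temperature, K
r1    <- 1e-6     # reference radius: 1 um
r2    <- 100e-9   # nanocrystal radius: 100 nm

S_ratio <- exp(2 * sigma * M / (rho * Rgas * T) * (1 / r2 - 1 / r1))
S_ratio  # about 1.06, i.e. roughly a 6% solubility increase at 100 nm vs 1 um

Even this modest difference in saturation solubility between small and large crystals is enough to drive the diffusive mass exchange described above, which is why a narrow size distribution helps to suppress ripening.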
Sedimentation

Sedimentation is a common cause of instability of nanosuspensions. In a suspension, particles of larger size settle naturally under the action of gravity, and their settling velocity follows Stokes' law:

(2) $v = \dfrac{2 r^{2} (\rho_1 - \rho_2) g}{9 \eta}$

where $v$ is the settling velocity of a particle; $r$ is the particle radius; $\rho_1$ and $\rho_2$ are the densities of the particle and the medium, respectively; $\eta$ is the viscosity of the dispersion medium; and $g$ is the gravitational acceleration. The sedimentation behavior of nanosuspensions can be divided into two types: flocculation and deflocculation. Flocculated suspensions are characterized by rapid and loose sedimentation, and the sediments are easily redispersed. In contrast, deflocculated suspensions show slow and dense sedimentation. Nanocrystal deposition is acceptable if the deposition rate is low and the sediments are easily redispersed. However, irreversible precipitation can lead to severe fluctuations in drug quality, thereby making it impossible for patients to obtain a uniform dose. Therefore, the inhibition of nanocrystal deposition is crucial for increasing the stability of nanocrystal drugs (Gao et al., ; Martínez et al., ).
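To make the strong size dependence in Eqn (2) concrete, the short R sketch below compares the Stokes settling velocity of particles with 100 nm and 1 μm radii. The density and viscosity values are illustrative assumptions (drug density 1300 kg/m^3, water-like medium), not data from the cited studies.

# Stokes settling velocity (m/s) as a function of particle radius
stokes_v <- function(r, rho_particle = 1300, rho_medium = 1000,
                     eta = 1e-3, g = 9.81) {
  2 * r^2 * (rho_particle - rho_medium) * g / (9 * eta)
}

stokes_v(100e-9)  # ~6.5e-9 m/s, i.e. well under 1 mm per day for a 100 nm radius
stokes_v(1e-6)    # ~6.5e-7 m/s, i.e. 100 times faster for a 10-fold larger radius

Because the velocity scales with the square of the radius, keeping the particle size in the nanometer range (and, per Stokes' law, raising the medium viscosity) is the most direct way to slow sedimentation.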
Formability mechanism of nanocrystal suspensions

3.1. Drug-related factors

The formation of nanocrystal suspensions is influenced by the physical and chemical properties of the drugs, including polymorphism, log P, enthalpy, cohesive energy, etc. Not all drugs can form stable nanocrystal suspensions.

3.1.1. Drug polymorphism

Many factors influence the molecular arrangement in drug nanocrystals, such as the solvent, temperature, and preparation process. The polymorphic form, physical stability, and solubility vary among these arrangements (Shi et al., ). Therefore, in the formation of stable drug nanostructures, the polymorphic forms of drug nanocrystals must be considered. Compared with crystalline forms, amorphous forms are relatively unstable, and amorphous drugs are more soluble and prone to Ostwald ripening, thereby leading to an increase in the drug particle size (Lindfors et al., ).

3.1.2. Drug hydrophobicity

The logarithm of the drug distribution coefficient, Log P, is the ratio of the concentration of an undissociated drug in the organic phase (usually n-octyl alcohol) to its equilibrium concentration in water. n-Octyl alcohol is commonly used as the organic phase due to its similarity to the lipid layer of cell membranes, while water is used as the aqueous phase to simulate intracellular fluids. Log P is usually used to describe the hydrophilicity and hydrophobicity of a drug. When the concentration of the drug in the organic phase is 10 times the concentration in water, Log P is equal to 1. The larger the value of Log P, the higher the hydrophobicity. The main advantage of strongly hydrophobic drugs over hydrophilic drugs in nanocrystal preparation is that stabilizers can cover the nanocrystals more easily. George & Ghosh found that drugs with high Log P values form highly stable nanosuspensions. The researchers believe that the attraction between the hydrophobic surface of the drug and the hydrophobic functional group of the stabilizer leads to the strong adsorption of the stabilizer on the drug surface and that hydrophobic drugs are more suitable than hydrophilic drugs for nanocrystal preparations because of the risk of reversible dissolution and precipitation.

3.1.3. Drug enthalpy and cohesive energy

Enthalpy represents the strength of the intermolecular interactions, and cohesive energy refers to the energy that is required by condensed matter to eliminate the intermolecular interactions. Both are important state parameters for characterizing the energy of a material system. George & Ghosh found that drugs with low enthalpy are prone to aggregation during storage. Due to the low enthalpy of these compounds, the crystal structures of the drugs in water are easily destroyed, which may lead to a transition from a crystalline form to an amorphous form and, thereby, to instability of the drug nanosystem. Yue et al. found that the surface hydrophobicity and cohesion of drugs are the main factors for the formation of nanocrystal suspensions. Provided that the stabilizer can wet the drug, drugs with high cohesion are more likely to form stable nanocrystal suspensions.

3.2. Stabilizing agent related factors

Stabilizers are essential for preventing nanocrystals from aggregating. The surface tension of drug nanocrystals is often very high, which leads to the facile aggregation of drug particles. The use of a suitable stabilizer can reduce the surface tension and prevent the aggregation of nanocrystals.
As illustrated in , ionic surfactants stabilize suspensions by initiating electrostatic repulsion between drug nanocrystals. In this case, when the stabilizer is adsorbed on the drug surface, an electrical double layer is formed from the hydrophilic part of the stabilizer, and a charge is formed around the drug. When two drug particles are attracted to each other, they move closer together, and when the distance is reduced past a threshold, the two layers of like charge repel each other and the particles separate, which ultimately prevents aggregation. Polymers and nonionic surfactants maintain the stability of suspensions through spatial barriers; they act as steric stabilizers, with their hydrophobic segments adsorbing onto the surfaces of drug nanocrystals. The long hydrophilic chains of the polymers adsorbed on the nanocrystal surface extend outward, thereby limiting the movement of the drug particles and maintaining the distance between them. While elucidating the stabilization mechanisms of different stabilizers, their deficiencies are also exposed. The stability of a nanosuspension system stabilized by electrostatic repulsion can be compromised by electrolytes or highly acidic conditions. In particular, because oral drugs are exposed to acidic gastric conditions, a system stabilized by electrostatic interactions may be destabilized by the electrolytes in gastrointestinal fluids (Rachmawati et al., ). The stability of a nanosuspension system stabilized by steric hindrance is not disturbed by charged ions, but the interactions between the stabilizer and the drug are more complex, and a suitable polymer should be selected according to the physical and chemical properties of the drug (George & Ghosh, ). Suspensions containing high concentrations of polymers and drugs are often not conducive to the preparation of nanosuspensions because of their high viscosity (Medarević et al., ). Many studies have reported that stabilizers with different stabilization mechanisms can be combined in the preparation of nanosuspensions to produce a synergistic effect and obtain a stable nanosuspension system (Zuo et al., ; Toziopoulou et al., ; Medarević et al., ). In addition, some less commonly used stabilizers, such as whey protein isolate and soybean protein isolate (He et al., ), have a strong affinity for drugs and adsorb stably on the drug surface, forming an effective steric protective barrier. Some polyphenols, such as tannic acid and epigallocatechin gallate, have also been used in nano-drug delivery systems (Bartzoka et al., ; Luo et al., ; Su et al., ). lists the common stabilizers classified based on the mechanism of stabilization. This section discusses the influence of the key properties of stabilizers on the development of stable nanosuspension formulations.

3.2.1. Molecular weight of the stabilizer

The hydrophobic end of a polymer stabilizer adsorbs on the surface of the drug nanocrystal, which can provide spatial stability, and stabilizers with higher molecular weight typically outperform stabilizers with lower molecular weight. The mutual attraction between drug nanocrystals that is caused by van der Waals forces leads to the aggregation of the nanocrystals. A long-chain polymer stabilizer can effectively induce spatial repulsion and prevent the aggregation of particles (Lee et al., ).
A polymer stabilizer with a molecular weight of less than 5000 g/mol has difficulty forming a spatial barrier against the mutual attraction between particles. In comparison, a polymer stabilizer with a molecular weight that exceeds 25,000 g/mol may lead to nanocrystal bridging due to the large molecular chain length (Lee et al., ; Choi et al., ; Peltonen & Hirvonen, ; Tuomela et al., ). The selection of a polymer of suitable molecular weight via experimental design is essential for the preparation of a stable nanosuspension.

3.2.2. Hydrophilic and hydrophobic properties of the stabilizer

The hydrophilicity and hydrophobicity of a surfactant can be expressed by the hydrophilic-lipophilic balance (HLB) value (Pasquali et al., ; VermaGokhale et al., ). The HLB value of a hydrophobic surfactant is low, while that of a hydrophilic surfactant is high. To improve the stability of drug nanocrystals, the stabilizer should have sufficient affinity with the surfaces of the drug particles (Lee et al., ). When insoluble drugs show high hydrophobicity, the hydrophobicity of the stabilizer is the main driving force for adsorption onto the drug particle surface, which is crucial for the spatial stability and uniform dispersion of the drug particles (Van Eerdenbrugh et al., ). Stability is impossible without adsorption, and a well-dispersed nanocrystal suspension cannot be obtained without it. Moreover, the hydrophilicity of the stabilizer is important because most drug nanocrystals are dispersed in water, and the hydrophilic portion of the stabilizer will be oriented toward the water rather than the hydrophobic surface of the drug, thereby facilitating the inhibition of nanocrystal aggregation. Hydrophilic molecules that carry electric charges can further stabilize drug nanocrystals through electrostatic repulsion between crystals, thereby providing sufficient spatial or charge stability for the drug nanocrystals. Ferrar et al. investigated the effects of 28 stabilizer formulations on the formability of drug nanocrystals using three insoluble drugs as models and found that the key factors affecting the stability of the nanocrystals were the amphiphilicity of the stabilizer and whether it had a sufficiently long hydrocarbon chain. Molecular modeling showed that surfactant molecules with long and flexible hydrophobic chains can anchor on the surfaces of nanocrystals more effectively, thereby increasing the stability. Therefore, a stabilizer must have a suitable balance between hydrophilicity and hydrophobicity.

3.2.3. Concentration of the stabilizer

It is necessary to prepare stable nanocrystals with a suitable stabilizer concentration. The optimal stabilizer concentration will maximize the adsorption affinity of the stabilizer to the drug surface (Deng et al., ). Spatial repulsion is induced by coating drug nanocrystals with stabilizers to prevent Ostwald ripening. Therefore, if the stabilizer concentration is insufficient, the drug particles cannot be effectively coated. If several drug particles attach to the same stabilizer molecule, particle aggregation and bridging can occur, thereby resulting in reduced stability. The stability of a nanosuspension is not directly proportional to the concentration of the stabilizer. Excessive stabilizer may lead to Ostwald ripening and decrease the stability over time. In addition, amphiphilic stabilizers at concentrations that exceed the critical micelle concentration (CMC) may lead to micelle formation.
As the number of micelles increases, the micelles begin to compete for surface adsorption, and the total adsorption capacity at the drug interface begins to decrease, which will further undermine the stability of the nanosystem, thereby resulting in an increase in the particle size (Lo et al., ; Hui et al., ). Therefore, the use of a suitable stabilizer concentration is critical (Rangel-Yagui et al., ; Deng et al., ; Peltonen & Hirvonen, ; Hui et al., ).

3.3. Combined action factor

3.3.1. Drug solubility in a stabilizer solution

The solubility of a drug is affected by the type of stabilizer that is used. When a stabilizer solution increases the solubility of drug nanocrystals, the stability of these crystals decreases over time, thereby leading to the growth of the nanocrystals. For example, a study showed that PVP K30, Pluronic F68, and HPMC had no significant effect on ibuprofen solubility (VermaGokhale et al., ), and stable nanocrystal suspensions were obtained; however, as stabilizers, SLS, Tween 80, and Pluronic F127 increased the solubility of ibuprofen, thereby resulting in instability of the nanosuspensions and increased particle size during storage. Ghosh et al. reported similar results in a study on the use of the wet grinding process to improve the bioavailability of insoluble drugs: as 1% SLS increased the solubility of the drugs, it also exacerbated the Ostwald ripening phenomenon. Therefore, stabilizers with the weakest influence on drug solubility are the first choice for the preparation of a nanosuspension.

3.3.2. Surface energies and specific interactions of the drug and stabilizer

The interactions between drug nanocrystals and polymer stabilizers depend mainly on their respective surface energies. Especially when drug nanocrystals are dispersed in water, they have a large surface area and high surface tension due to their small particle size and strong hydrophobicity. Therefore, drug nanocrystals exhibit higher surface free energy, and their dispersions become unstable, thereby leading to aggregation, solidification, or crystal growth (Verma et al., ). To reduce the surface free energy and improve the stability of drug nanocrystals, it is necessary to wet or hydrate the surfaces of the drug nanocrystals. The surface of a drug nanocrystal can be hydrated and modified by various materials to reduce the surface free energy (Gong et al., ; Wang & Gong, , ). Hydrophilic polymers are commonly used to hydrate nanocrystal surfaces because they can interact strongly with surrounding water molecules (Choi et al., ). In a study that analyzed the effects of polymer stabilizers on the stability of drug nanocrystals, seven drugs were wet-comminuted to form nanocrystals (Choi et al., ), and hydroxypropyl cellulose (HPC) and polyvinylpyrrolidone (PVP) were used as stabilizers. The results demonstrate that a drug with a surface energy that is similar to that of PVP can form stable nanocrystals effectively. Due to the strong interactions between drug nanocrystals and stabilizers, the use of polymer stabilizers that are similar in surface energy to the drug usually results in drug nanocrystals of stable and uniform particle size (Lee et al., ). The surface energies of drugs and stabilizers can be assessed using 'static contact angle measurements' (Choi et al., ; Lee et al., ) (see the subsection on contact angle measurement below for details).
3.3.3. Effects of dispersion media

To form a stable nanosystem, the temperature and viscosity of the dispersion medium must be suitable. The Stokes-Einstein equation can be used to explain the influence of the temperature and viscosity on the stability of the nanosuspension:

(3) $D = \dfrac{kT}{6 \pi \eta r}$

where $D$ is the diffusion coefficient, $k$ is the Boltzmann constant, $T$ is the thermodynamic temperature, $\eta$ is the viscosity, and $r$ is the radius of the spherical particle (Zwanzig & Harrison, ; Harris, ). According to the equation, the stability of the nanosystem is negatively correlated with the temperature and positively correlated with the viscosity of the medium. According to the Stokes-Einstein equation, high viscosity reduces the diffusion velocity of drug particles and, thus, stabilizes the nanosuspension (Milewski et al., ). The formation of the hydrophobic interaction between the nanocrystal system and the stabilizer is a negative-entropy process: the higher the temperature of the nanocrystal system, the lower the stability of the system and the more likely the nanocrystal drugs are to aggregate. However, an increase in temperature will also lead to a decrease in viscosity and an increase in the diffusion coefficient, which is very unfavorable for the interactions between particles in the nanosystem (Kakran et al., ). However, in a study that compared surfactants with polymer stabilizers, it was found that although surfactants have low viscosity, their high surface activity resulted in higher stability (Van Eerdenbrugh et al., ). Polymer stabilizers with high viscosity perform poorly in the preparation of stable nanocrystals, mainly because the high viscosity inhibits the reduction of the particle size during the preparation of the nanocrystals.

3.4. Characterization and evaluation of the nanosuspension

3.4.1. Contact angle measurements

Contact angle measurement is a method for assessing the wettability of a stabilizer: the smaller the contact angle, the higher the wettability. The contact angle of a stabilizer solution can be measured by compressing a small amount of powder to form a disk. Yue et al. evaluated the wettability of drugs through contact angles; drugs with small contact angles and satisfactory wettability easily form stable nanosuspensions. Pardeike & Müller used the contact angle as the criterion for selecting the stabilizer in a nanosuspension formulation. Purified water showed a contact angle of 51.6° on the compressed PX-18 disk; with a 0.1% (w/v) Tween 80 solution, the contact angle was reduced to 23.2°. Therefore, Tween 80 was selected as the stabilizing agent for PX-18 nanosuspensions. In another study, in which various stabilizers were screened for the preparation of miconazole nanosuspensions, the contact angles between the stabilizer solutions and the drug were determined (Cerdeira et al., ). The contact angle between miconazole and pure water exceeded 140°. The contact angle was determined to be 43° for a 2.5% HPC-LF and 0.1% SLS solution. However, miconazole had a large contact angle with PVP/SDS and Poloxamer solutions, which indicated poor wettability of the drug. The nanocrystal size was smaller when the stabilizer system with the lowest contact angle was used, which further demonstrated the practicability of the method.

3.4.2. Micromorphological characterization

Atomic force microscopy (AFM) is an important visualization tool for nanocrystals.
It enables the qualitative and quantitative analysis of the physical properties of nanocrystals, such as the size, surface structure, roughness, and morphology. The interaction forces between atoms and molecules are used to observe the surface morphology of an object and provide a three-dimensional surface map. Compared with scanning electron microscopy (SEM) and transmission electron microscopy (TEM), it has many advantages: electron microscopes can only provide two-dimensional images, while AFM can capture three-dimensional images of nanocrystal surfaces without any special processing of the sample. Atomic force microscopy has proved to be a valuable tool for visualizing and quantifying pharmaceutical nanocrystals in preparations. In addition to precise size measurements, AFM can easily provide information about the shape and structure of nanoparticles that cannot be obtained by light scattering or other methods (Shi et al., ; Du et al., ). In addition, the method can be used to evaluate the interactions between the stabilizer and the surfaces of the drug particles, and the resulting affinity can be a satisfactory indicator of the stability of the nanocrystal preparation with that stabilizer. Verma et al. used AFM technology to screen the stabilizers in ibuprofen nanocrystal formulations. The captured AFM images clearly show that the polymer chains of HPMC and HPC are fully unfolded and adsorbed on the ibuprofen particle surface. The strong interactions between HPMC/HPC and the ibuprofen drug particles strongly suggest that both polymers are suitable for the formation of stable ibuprofen nanosuspensions. In contrast, the AFM images of PVP and Poloxamer show incomplete adsorption on the ibuprofen particle surface, which results in the low stability of the nanocrystal preparations that are obtained using PVP and Poloxamer.

3.4.3. Particle size distributions of suspensions

The polydispersity index (PDI) represents the breadth of the particle size distribution of a nanocrystal suspension and is affected by its physical stability. Under normal circumstances, a PDI value of 0.1–0.25 corresponds to a narrow particle size distribution, which indicates a stable nanocrystal suspension system, while a PDI value of >0.5 corresponds to a wide particle size distribution (Shah et al., ). Ensuring a narrow particle size distribution is an effective method for reducing the concentration gradient and the differences in the saturation solubility of drug nanocrystals. When drug nanocrystals have a wide particle size distribution, Ostwald ripening is more likely, which leads to decreases in the drug solubility and the dissolution rate and, ultimately, to a decrease in the bioavailability. Therefore, maintaining a narrow particle size distribution of drug nanocrystals is highly important for ensuring the stability of a drug nanocrystal suspension. Photon correlation spectroscopy (PCS) is one of the most commonly used particle size characterization techniques. It uses the principle of dynamic light scattering to evaluate the average particle size (Z-average), the particle size distribution, and the zeta potential (the potential at the shear plane) of nanocrystals. The PDI values range from 0 (monodispersed particles) to 0.500 (polydispersed particles) and are used to monitor the physical stability of nanocrystals. PCS has a narrow measurement range (e.g. from 3 nm to 3 μm) and is not suitable for large particle size measurements.
When the particles are larger, they are measured via laser diffraction (LD), which covers a wide particle size range (0.02–2000 µm) depending on the type of instrument that is used. The data that are measured via PCS and LD are not directly comparable because the LD data are based on the volume distribution, whereas the PCS data are intensity-weighted. LD only measures the particle size distribution, whereas PCS also measures the average particle size and zeta potential, and the intensity data can be converted into volume and number distributions. If nanosuspensions are to be used intravenously, it is necessary to use the Coulter counting method. Since the smallest capillaries are 5 µm in size, there is a risk of capillary blockage if any particles that are larger than 5 µm are present in the intravenous formulation. The Coulter counting method provides the absolute number of particles per unit volume at various size levels; hence, the number of oversized particles can be strictly controlled. Keck found that the dissolution of nanocrystals during measurement significantly affected the size results that were obtained. When an unsaturated medium, or a medium saturated only with respect to the micro-sized drug, is used, the sample will dissolve, the measurement will be unstable, and the results will be unreproducible. If the particle sizes of nanocrystals are to be analyzed, the dispersion media should be pre-saturated with the nanocrystals because the solubility of the nanocrystals exceeds that of micro-sized drugs. In the early stage of formulation development, it should be confirmed whether the particle size analysis method requires a pre-saturated dispersion medium. The characterization of nanoparticles using both dynamic and static light scattering techniques can yield meaningful results if the necessary prerequisites are satisfied. Via the development and validation of a reasonable particle size detection methodology, misleading studies can be avoided, and stable and unstable nanocrystals can be reliably distinguished at an early stage of development.

3.4.4. Zeta potential in suspension

The zeta potential (ζ) is a main factor affecting the physical stability of nanocrystal suspensions. It is a measure of the charge at the shear plane of the particles and reflects the physical stability of colloidal systems. When the absolute zeta potential of the drug nanocrystals is very small, the attraction between the particles exceeds the electrostatic repulsion, thereby causing nanocrystal aggregation. Typically, an absolute zeta potential of approximately 30 mV is required for obtaining an electrostatically stable nanocrystal suspension. The zeta potential of a suspension can be used to predict the storage stability, and particles with sufficient zeta potentials are difficult to aggregate due to electrostatic or spatial repulsion between the particles. The zeta potential represents the stability of a nanosuspension; hence, it is necessary to evaluate the level of the zeta potential value reasonably. When a polymer is used as a stabilizer, the zeta potential on the nanocrystal surface depends more strongly on the polymer concentration than on the surfactant concentration; thus, the absolute potential value must be no less than 20 mV. In one study, the zeta potential of a glyburide nanosuspension that was stabilized by HPMC and SLS depended more strongly on the polymer concentration than on the surfactant concentration (Singh et al., ). HPMC is a nonionic polymer, and SLS is an anionic surfactant.
3.4.4. Zeta potential in suspension

The zeta potential (ζ) is a key factor affecting the physical stability of nanocrystal suspensions. It is a measure of the charge at the shear plane of the particles and reflects the physical stability of colloidal systems. When the absolute zeta potential of the drug nanocrystals is very small, the attractive forces between the particles exceed the electrostatic repulsion, causing nanocrystal aggregation. Typically, an absolute zeta potential of approximately 30 mV is required to obtain an electrostatically stabilized nanocrystal suspension. The zeta potential of a suspension can be used to predict storage stability, and particles with a sufficient zeta potential are unlikely to aggregate, owing to electrostatic or steric repulsion between the particles. Because the zeta potential reflects the stability of a nanosuspension, its value must be interpreted in the context of the stabilization mechanism. When a polymer is used as a stabilizer, steric stabilization supplements the electrostatic repulsion, and an absolute zeta potential of no less than about 20 mV is generally considered sufficient. In one study, the zeta potential of a glyburide nanosuspension stabilized by HPMC and SLS depended more strongly on the polymer concentration than on the surfactant concentration (Singh et al., ). HPMC is a nonionic polymer, and SLS is an anionic surfactant. When the polymer concentration is low, the drug particle surface is only sparsely covered by the polymer; as a result, the anionic surfactant can more easily reach the particle surface, and the absolute zeta potential increases with increasing SLS concentration. At higher HPMC levels, however, the surface potential of the nanocrystals is not significantly affected by the SLS concentration. Similar results were obtained in another study, in which the zeta potential of a meloxicam suspension depended more strongly on the polymer concentration than on the surfactant concentration (Singare et al., ). Nanosuspensions are typically stabilized through the synergistic action of polymeric and charged stabilizers. Therefore, for the polymers and charge stabilizers used to prepare nanocrystal suspensions, an optimal balance should be sought between the electrostatic repulsion indicated by the zeta potential and the steric stabilization provided by the polymer.

3.4.5. Storage stability

The stability of a nanosuspension can be evaluated experimentally under various storage conditions, with the nanocrystals assessed according to their size, polydispersity index (PDI), and zeta potential (Geng et al., ; Gol et al., ). In one study, miconazole nitrate nanocrystal suspensions were stored at refrigerated (4 °C), room (25 °C), and elevated (40 °C) temperatures for further investigation (Pyo et al., ). The particle size and PDI of the nanosuspensions stabilized by Tween 80 did not change at 4 °C and showed almost no change at 25 °C; however, both increased during storage at 40 °C. Optical microscopy revealed needle-shaped crystals with a Feret diameter of approximately 5 μm, which lay outside the measurement range of PCS and therefore could not be detected by it. When Poloxamer 407 was used as the stabilizer, the particle size and PDI did not increase at 4 °C or 25 °C over 3 months; particle growth was observed at 40 °C, but the increase was significantly smaller than that of the Tween 80-stabilized suspension.
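To show how the size, PDI, and zeta-potential criteria discussed above might be tracked together during a storage study, here is a minimal sketch. The thresholds (roughly 30 mV for purely electrostatic stabilization, roughly 20 mV when a polymeric stabilizer also contributes, and PDI above 0.5 as a broad distribution) are the guideline figures from this section; the 20% growth limit, the data points, and all names are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class StabilityPoint:
    day: int
    size_nm: float   # Z-average particle size
    pdi: float       # polydispersity index
    zeta_mv: float   # zeta potential


def assess(points, polymer_stabilized=True, max_growth=1.2):
    """Flag the guideline criteria discussed above: adequate |zeta|,
    acceptable PDI, and limited particle growth versus day 0."""
    zeta_limit = 20.0 if polymer_stabilized else 30.0
    baseline = points[0].size_nm
    for p in points:
        flags = []
        if abs(p.zeta_mv) < zeta_limit:
            flags.append(f"|zeta| below {zeta_limit:.0f} mV")
        if p.pdi > 0.5:
            flags.append("PDI above 0.5 (broad distribution)")
        if p.size_nm > max_growth * baseline:
            flags.append(f"size grew more than {int((max_growth - 1) * 100)}% vs day 0")
        status = "; ".join(flags) if flags else "within guideline values"
        print(f"day {p.day:>3}: {p.size_nm:.0f} nm, PDI {p.pdi:.2f}, "
              f"{p.zeta_mv:+.0f} mV -> {status}")


# Assumed 40 °C storage data for an illustrative polymer-stabilized suspension
assess([StabilityPoint(0, 210, 0.18, -28),
        StabilityPoint(30, 240, 0.24, -25),
        StabilityPoint(90, 410, 0.55, -17)])
```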
Solidification of nanocrystal suspensions

Solidification is itself a stabilization strategy, as solid preparations are generally more stable than liquid ones. Solidifying a nanocrystal suspension reduces destabilizing phenomena such as aggregation and Ostwald ripening; hence, prepared nanocrystal suspensions are usually converted into the solid state. The solid powders are then converted into other dosage forms, such as sterile powders for injection, oral tablets, and capsules (Wang et al., ).

4.1. Solidification methods for nanocrystal suspensions

The solidification process is a key step in forming the final product. Solidification methods include spray drying, freeze drying, electrostatic spray drying, and the use of an aerosol flow reactor, among others (Chan & Kwok, ; Ho & Lee, ). In addition, fluidized bed coating technology has been applied in industry. Fluidized bed coating of pellets is a one-step pelletizing method in which a nanocrystal suspension is dried and coated onto pellet cores. The resulting pellets have satisfactory flowability, which facilitates tablet compression and capsule filling. Spray drying and freeze drying are the two main solidification methods. Because it requires less time and energy, spray drying is used more widely in the pharmaceutical industry than freeze drying; however, spray drying is not suitable for heat-sensitive drugs, for which freeze drying is the preferred technique. The aggregation of nanoparticles should be minimized during solidification. In a nanocrystal suspension, stabilizers provide ionic or steric stability by adsorbing onto the surfaces of the drug nanoparticles, thereby preventing aggregation. During solidification, however, the stabilizers themselves dry and solidify, which may lead to irreversible aggregation of the drug nanoparticles (Chaubal & Popescu, ). Medarević et al. found that spray-dried carvedilol nanocrystals exhibited satisfactory redispersibility on contact with water, whereas strong agglomeration during freeze drying prevented the redispersion of the carvedilol nanocrystals. A suitable solidification method should therefore be selected (Niwa et al., ; Wang & Gong, ). The dissolution rate of the dried powder in water also differs among solidification methods. Salazar et al. studied the effects of spray drying, freeze drying, and wet granulation on the dissolution rates of glibenclamide nanoparticles (Salazar et al., ); the dissolution rate was highest for spray drying, moderate for freeze drying, and lowest for wet granulation. presents case studies on the solidification of nanocrystal suspensions.

Regardless of the solidification method, it is important to preserve the properties of the nanocrystal particles after water is removed from the suspension, and the redispersibility of the nanocrystals after solidification is a major concern. Dispersants (protectants) are typically added to nanosuspensions to maintain the redispersibility of the nanocrystals in water after solidification (Van Eerdenbrugh et al., ). Most protectants are water-soluble, such as mannitol, sucrose, and lactose, as well as water-soluble polymers such as hydroxypropyl methyl cellulose (Dan et al., ; Parmentier et al., ). When the dry powder comes into contact with an aqueous medium, the protectant surrounding the nanoparticles dissolves rapidly, releasing the nanocrystals and maintaining them in their original dispersed state. In a study on the preparation of fenofibrate nanocrystals, Zuo et al. found that, without a protectant, the average particle size of the redispersed fenofibrate increased to 3901 nm, about six times the particle size before drying. This means that irreversible aggregation occurred during drying, so the dry powder could no longer disperse into nanoparticles of the original size. A water-soluble dispersant can form a bridge connecting hydrophilic excipients to the nanocrystals. When spray drying was performed with added protectants (lactose, sucrose, glucose, maltose, or mannitol), the redispersibility of fenofibrate improved substantially, with mannitol being the most effective protectant for maintaining the redispersibility of the nanocrystals. Teeranachaideekul et al. studied the particle sizes of nanosuspensions after freeze drying with and without cryoprotectants; the average particle size of the nanocrystals without cryoprotectants exceeded that of the nanocrystals with cryoprotectants. In a study of spray drying naproxen nanocrystals, Kumar et al. found that lactose and trehalose effectively inhibited nanoparticle aggregation; ultimately, trehalose was chosen as the protectant for the naproxen nanocrystal powder because of its higher yield than lactose.
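The redispersibility comparisons above can be summarized with a simple redispersibility index, the ratio of particle size after reconstitution to that before drying. The sketch below is illustrative only: the roughly 650 nm starting size is inferred from the sixfold increase quoted for fenofibrate, the "with mannitol" value is an assumed placeholder, and the function name is ours.

```python
def redispersibility_index(size_before_nm: float, size_after_nm: float) -> float:
    """Ratio of Z-average size after reconstitution to that before drying.
    A value close to 1 indicates nearly complete redispersion; large values
    indicate irreversible aggregation during solidification."""
    return size_after_nm / size_before_nm


# Assumed values mirroring the fenofibrate case quoted above:
# roughly 650 nm before drying, 3901 nm after drying without a protectant.
print(f"No protectant: RDI = {redispersibility_index(650, 3901):.1f}")
print(f"With mannitol (assumed): RDI = {redispersibility_index(650, 700):.1f}")
```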
4.2. Characterization and evaluation of solid nanocrystal preparations

4.2.1. Surface morphology

The sizes and shapes of solidified nanocrystals are typically analyzed via scanning electron microscopy (SEM) and transmission electron microscopy (TEM). In SEM, images are generated through the interaction between the electron beam and atoms at various depths in the sample; for example, by collecting secondary and backscattered electrons, information about the microstructure of the material can be obtained. In TEM, an image is obtained by capturing the electrons transmitted through the sample. The accelerated, focused electron beam passes through a very thin specimen, and electrons that collide with atoms in the sample change direction, producing solid-angle scattering that can be used to observe the ultrastructure of the particles; the resolution can reach 0.1–0.2 nm.

4.2.2. Crystal characteristics

The crystal characteristics of the bulk drug are important quality attributes of the final nanocrystal product. During processing, the crystalline form of the drug may change because of mechanical stress and temperature variations. Although amorphous drugs have higher solubility, faster dissolution, or better compression properties, they are less physically and chemically stable than crystalline drugs, which can result in inconsistent final product quality. Therefore, possible changes in crystal form before and after processing must be considered. Nanocrystals can be characterized via differential scanning calorimetry (DSC), powder X-ray diffraction (P-XRD), Fourier-transform infrared spectroscopy (FTIR), and Raman spectroscopy. DSC is a thermal analysis method, and the curve recorded by a differential scanning calorimeter is called a DSC curve.
The rate of absorption or exothermic heat of the sample, namely, the heat flux rate (dH/dt), is selected as the ordinate, and the temperature (T) is selected as the abscissa. The endothermic peak, which can be readily observed in the DSC diagram, represents the energy consumption and is used to determine the melting point of the corresponding nanocrystal. The amorphous material shows no readily observable melting point peak but shows a glass transition temperature. Nanocrystals with smaller particle size are closer to the amorphous state and, therefore, have lower melting point peaks compared with the bulk drug crystals. P-XRD is another method for evaluating the crystal forms of nanocrystals. In some cases, the X-ray diffraction pattern of the nanocrystals may also show reduced or no peaks due to partial or complete amorphous formation of the nanocrystals during the grinding process (Zhang et al., ). Infrared spectroscopy is based on the differences in the infrared characteristic absorption spectra among functional groups in a material structure. When a reaction occurs between two components, the infrared absorption peak displacement or peak intensity change is generated, which is used to identify the molecular interaction between the two components. Raman spectroscopy is a type of molecular vibration spectroscopy that is based on inelastic light scattering. Its analysis principle is similar to that of infrared spectroscopy, but infrared signals are produced mainly by asymmetric vibration and polar groups. Therefore, by combining the results of Raman and infrared spectroscopy, the interaction between the drug and excipient in a nanocrystal preparation can be investigated at the molecular level, and a more comprehensive judgment can be obtained (Doyle, ). Zuo et al. evaluated the crystal morphology of a sample with DSC and P-XRD. The DSC thermal image shows that the heat absorption peaks of the spray powder and tablet are shifted slightly forward, which may be because the drug is partially transformed into an amorphous form in the process of crushing or micro pulverization; the particle size reduction of the fenofibrate crystal may also cause the heat absorption peak to shift forward. With the crystallinity of fenofibrate bulk drug as 100%, the crystallinities of fenofibrate in the spray drying powder and tablet are approximately 95% and 73%, respectively. An X-ray diffraction (P-XRD) image showed that fenofibrate crystal I was retained in both the spray drying powder and the tablet but the crystalline transformation of mannitol occurred during spray drying, which was consistent with the DSC results that are presented above. According to a DSC thermal image that was obtained in a study that was conducted by Medarević et al. , carvedilol showed a shift of the absorption peak and a decrease of the melting point after freeze drying or spray drying. Since thermal stress during the analysis will lead to a polymorphic transition, DSC technology cannot accurately identify the polymorphic transitions of materials. Therefore, according to P-XRD analysis results, neither wet grinding nor spray drying will cause polymorphic transitions of materials, while carvedilol will undergo crystal transformation during freeze drying. In combination with FTIR technology, the crystal type of carvedilol was identified, and there was no interaction between carvedilol and the functional groups of the stabilizers, such as HPC-SL and mannitol . 
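The particle-size-dependent melting-point depression seen in these DSC traces is commonly rationalized with the Gibbs-Thomson relation; the expression below is a standard one added here for orientation and is not taken from the cited studies.

```latex
% Gibbs-Thomson melting-point depression for a spherical crystal of radius r
% (standard relation; not part of the cited DSC work):
T_{m}(r) \;=\; T_{m}^{\infty}\left(1 - \frac{2\,\gamma_{sl}\,V_{m}}{\Delta H_{f}\,r}\right)
```

Here T_m^∞ is the bulk melting temperature, γ_sl the solid-liquid interfacial energy, V_m the molar volume and ΔH_f the molar enthalpy of fusion; the smaller the crystal, the larger the depression, which is consistent with nanocrystal endotherms shifting to lower temperature than those of the bulk drug.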
In the process of nanocrystal drug development, multiple crystal characterization techniques can be combined to jointly investigate the possible crystal transformations and interactions that occur during the preparation of drug nanocrystals. 4.2.3. In vitro and in vivo drug release studies The drug release rates of drug nanocrystals are evaluated via an in vitro drug release study. The dissolution medium may be selected from among the pharmacopeia standard dissolution media or according to the solubilities of the drug in various media. The particle size of the nanocrystals determines the overall dissolution rate. Since nanocrystals have higher dissolution rates and larger surface-area-to-volume ratios, smaller particles dissolve faster than larger particles. The dissolution rates of nanocrystals can also be controlled by applying a coating of hydrophobic polymers. Owing to the diversity and heterogeneity of nanocrystal preparations and the complexity of in vivo release behavior, the establishment of an effective in vitro dissolution method for predicting in vivo release behavior remains a technical challenge. Kumar et al. used the dialysis sac method, developed in an earlier stage of their work, to conduct an in vitro release test. Samples were obtained at predetermined time intervals, and quantitative HPLC analysis was conducted to construct the dissolution curve. This method can discriminate among nanocrystals of different sizes and yield release curves for each size. Sievens-Figueroa et al. prepared a griseofulvin nanosuspension and compared the performance of the basket method and the flow-through cell method for in vitro drug release. The results demonstrated that the flow-through cell method outperformed the basket method. He et al. prepared teniposide nanosuspensions for intravenous administration. They used the dialysis bag method to compare the in vitro release of the freeze-dried teniposide nanosuspension preparation with that of the marketed preparation. The results revealed that passage of teniposide from the nanosuspension through the dialysis membrane was considerably slower than that from the marketed preparation. The slow release rate of the teniposide nanosuspension could be attributed to the slow dissolution of teniposide, which may help prolong the systemic circulation of teniposide during chemotherapy. In vitro release tests are crucial in preparation development and quality control. In addition to dialysis and the flow-through cell method, there are sampling-and-separation, gel, pressure ultrafiltration, turbidimetric and in situ methods (Crisp et al., ; Dai et al., ; Xia et al., ; Anhalt et al., ; Kumar et al., ; Xie et al., ; Liu et al., ). It has been proposed that the in vitro release method for nanodrug delivery systems could be improved by introducing in vivo proteins into the in vitro release medium to simulate the distribution characteristics of the drug delivery system in vivo (Liu et al., ). Many methods have been reported, and each has advantages and disadvantages. In the process of nano-formulation development, suitable dissolution equipment should be selected according to the drug properties, dosage form, and formulation process. Reasonable dissolution medium conditions should also be identified to develop suitable in vitro dissolution methods (Nothnagel & Wacker, ).
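The statement above that particle size governs the overall dissolution rate can be made quantitative with the Nernst-Brunner form of the Noyes-Whitney equation, a standard dissolution model that is not spelled out in the cited studies.

```latex
% Noyes-Whitney / Nernst-Brunner dissolution model (standard form, for illustration):
\frac{dm}{dt} \;=\; \frac{D\,A}{h}\,\left(C_{s} - C_{t}\right)
```

Here dm/dt is the dissolution rate, D the diffusion coefficient, A the surface area of the dissolving solid, h the diffusion-layer thickness, C_s the saturation solubility and C_t the bulk concentration at time t. Milling to the nanometre range raises A sharply, thins h and (through the Ostwald-Freundlich effect) slightly raises C_s, which together account for the faster release of smaller nanocrystals.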
The proposed dissolution method, which has distinguishing power, can screen for the desired formulation, optimize the technological parameters during the research process, and provide a reasonable reference for prescription evaluation. The optimal formulation is selected through in vitro dissolution to optimize the formulation and process parameters. Then, the drug release is studied in vivo to evaluate the bioavailability of the drug. Many research groups have studied the in vivo properties of nanocrystals by administering them to rats or mice through various routes. Guo et al. studied the in vivo performance of the rebamipide nanocrystal. They observed that the C max and AUC 0–24 h values of rebamipide nanocrystals were 1 and 1.57 times larger than those of the marketed preparations; hence, the nanocrystals significantly improved the bioavailability of the drug. However, if an effective in vitro and in vivo correlation (IVIVC) can be established, the number of experiments in vivo will be reduced significantly. IVIVC is a mathematical relationship between in vitro feature of the product (for example dissolution rate) and in vivo performance (Rettig & Mysicka, ). The major objective of IVIVC is to be able to use in vitro data to predict in vivo performance serving as a surrogate for an in vivo bioavailability test and to support biowaivers (Gonzalez-Garcia et al., ). Karakucuk et al. prepared ritonavir nanosuspension with microfluidization method. In vitro dissolution and in vivo bioavailability of nanosuspension were evaluated in the research. In nanosuspension formulation, the dissolution and solubility were improved which caused higher correlation between in vitro dissolution and in vivo pharmacokinetic data. Ghosh et al. conducted in vivo pharmacokinetic experiments with beagle dogs and found that there was a significant correlation between the particle size and bioavailability of drug molecules. As the dissolution rate increased, AUC and C max increased significantly when the drug was converted to nanocrystals. Nanosuspension with narrow distributions of particles produced systems with improved absorption, less variability, and superior stability by minimizing the Ostwald ripening process. Imono et al. prepared microsuspensions of two model drugs, namely, fenofibrate and megesterone acetate, along with three nanosuspensions with various particle sizes. Through in vitro dissolution-permeation studies and in vivo oral pharmacokinetic studies, it was found that the particle size reduction only slightly increased the apparent solubilities (1.4 times) but significantly increased the penetration rates of the two drugs (3 times). A strong positive correlation was identified between the in vitro permeation rate and the in vivo maximum absorption rate. The permeability increase due to the formation of nanocrystals is the main factor for improving the oral absorption, and the dissolution permeability in vitro can be used to predict the oral absorption enhancement of nanocrystals. The absorption mechanism of parenteral nanocrystal drug delivery is complex and diverse, which also brings great challenges to the study of nanocrystal drug release in vitro (Alexis et al., ). For example, intravenously administered nanocrystal formulations are a new type of therapeutics, which encounter a rather complex and dynamic in vivo environment. As a consequence, it is difficult to establish the IVIVC for these formulations and only few success stories have been published so far. Jablonka et al. 
established an IVIVC for the drug formulation Foscan ® on the basis of in vitro release and particle characterization data. Furthermore, extrapolation with a physiologically based pharmacokinetic and biodistribution model generates an expected in vivo biodistribution pattern from early preclinical in vitro and in vivo data. In brief, establishing an in vitro–in vivo correlation for nanocrystals makes it possible to predict the in vivo behavior of drugs reliably, elucidate the absorption mechanism and reduce the risk of clinical drug use (Bao et al., ; Litou et al., ).
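For a point-to-point (level A) IVIVC of the kind discussed above, the in vivo input is typically deconvoluted from plasma data (for example by the Wagner-Nelson method) and regressed against the in vitro fraction dissolved at matched time points. The sketch below only illustrates that workflow; the time points and fractions are hypothetical and are not taken from any of the cited studies.

```python
import numpy as np

# Hypothetical level A IVIVC: fraction dissolved in vitro (Fd) versus
# fraction absorbed in vivo (Fa) at matched time points. Fa would normally
# come from deconvolution of plasma data (e.g. Wagner-Nelson); the numbers
# below are illustrative only.
time_h = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0])
fd_in_vitro = np.array([0.18, 0.35, 0.58, 0.79, 0.93, 0.98])  # fraction dissolved
fa_in_vivo = np.array([0.15, 0.31, 0.55, 0.76, 0.90, 0.97])   # fraction absorbed

# Level A IVIVC assumes a linear relation Fa = a * Fd + b; a slope near 1
# and a high R^2 support using dissolution as a surrogate for bioavailability.
slope, intercept = np.polyfit(fd_in_vitro, fa_in_vivo, 1)
r_squared = np.corrcoef(fd_in_vitro, fa_in_vivo)[0, 1] ** 2

print(f"Fa = {slope:.2f} * Fd {intercept:+.2f}, R^2 = {r_squared:.3f}")
```

In practice, regulatory use of such a correlation (for example to support a biowaiver) would require validation across formulations with different release rates, but the underlying calculation is no more than the linear fit shown here.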
Conclusions Particle size instability has always been a major technical limitation in the development of nanocrystal drugs. The problems that are associated with nanocrystal drug instability include aggregation, Ostwald ripening, and sedimentation. The stability depends on the interactions between drug nanocrystals and the surface free energy, among other factors. The interactions between drug nanocrystals and stabilizers have yet to be fully understood, and the results cannot be clearly explained by established knowledge. The reason may be that the stability of drug nanocrystals is influenced by various factors, such as the physical and chemical properties of the nanocrystals, stabilizers, dispersion media, and surrounding environment, including temperature. Therefore, it is necessary to identify the most suitable stabilizer and prescription variables experimentally according to various action mechanisms and influencing factors. In addition, nanocrystal preparations still face major technical challenges, especially in the control of the effects of solidification on the physical stability and redispersibility. In vitro and in vivo evaluation and other aspects still need to be continuously explored to develop scientific and standardized preparation and evaluation methods.
Efferocytosis in multisystem diseases
Introduction Efferocytosis refers to the physiological process in which phagocytic cells clear apoptotic cells (ACs). Phagocytosis involves both specialized phagocytes (such as macrophages) and non-specialized phagocytes (such as epithelial cells). Efferocytosis can be divided into four stages: i) The 'find me' stage. Chemotactic factors induce macrophages to recognize and surround ACs. The 'find me' signal molecules released by ACs are recognized by cognate receptors on the surface of phagocytes, inducing the migration and recruitment of phagocytes to the ACs. ii) The 'eat me' stage. Phagocytic receptors of macrophages recognize and bind to the 'eat me' signal molecules of ACs, either directly or through bridging molecules. Following programmed cell death, the 'eat me' signal ligands exposed on the AC surface can bind directly to the 'eat me' signal receptors on the surface of phagocytes. Alternatively, one end of a bridging molecule binds to the 'eat me' ligand on the ACs and the other end binds to the 'eat me' receptor on the surface of the phagocyte. Therefore, phagocytes recognize and capture ACs through direct 'ligand-receptor' binding and indirect 'ligand-bridging molecule-receptor' binding. iii) The endocytosis stage. The 'eat me' signal molecules bound to phagocytic receptors activate the programmed cell removal machinery to form a 'phagocytic cup' and complete the endocytosis of ACs. iv) The 'post-phagocytosis' stage. Macrophages further digest and degrade apoptotic cell debris, activating multiple metabolic signaling pathways. After phagocytic cells engulf ACs, phagosomes are formed and then fuse with primary lysosomes to form phagolysosomes. When the phagolysosome matures, it begins to degrade the ACs and release anti-inflammatory cytokines such as IL-10 and TGF-β. In efferocytosis, a number of molecules act to clear ACs promptly so that normal tissues are not damaged. First, the 'find me' signal molecules, consisting of direct ligand molecules and indirect signal molecules, are released after cell apoptosis. The direct ligand molecules include the nucleotide triphosphates ATP and uridine-5′-triphosphate, lysophosphatidylcholine and sphingosine-1-phosphate. The indirect signal molecules include the CX3C chemokine ligand 1 (CX3CL1) protein. Second, the 'find me' signal is received by phagocytic cell receptors. The phagocytic receptors, including the G2 accumulation receptor (G2A), CX3C chemokine receptor 1 (CX3CR1), low-density lipoprotein receptor-related protein 1 (LRP1) and scavenger receptor class B type 1 (SRB1), interact directly with 'eat me' signal molecules on the surface of ACs, such as phosphatidylserine (PtdSer), oxidized phospholipids and the endoplasmic reticulum-resident protein calreticulin. Phagocytic receptors such as the Mer tyrosine kinase receptor (Mertk) also interact with 'eat me' signal molecules indirectly through bridging molecules. Extracellular bridging molecules, such as milk fat globule-epidermal growth factor 8 (MFGE8), serum complement C1q, transglutaminase 2, growth arrest-specific protein 6 (GAS6) and protein S (ProS), link phagocytes with ACs, activate the phagocytic function of phagocytes and promote removal of the ACs. These signaling molecules and extracellular bridging molecules are key to efferocytosis.
In addition, the ‘not eat me’ signal in non-apoptotic cells prevents viable cells from being cleared by phagocytes. Among them, the best-known signal molecule is CD47 (18; ). Efferocytosis is essential for human health, because it can prevent the deleterious effects of cell necrosis, thus maintaining the tissue and organ homeostasis and the normal immune response . Apart from preventing secondary necrosis, efferocytosis has three functions: Terminating inflammatory responses, promoting self-tolerance and activating pro-resolving pathways . Efferocytosis triggers the production of anti-inflammatory and tissue-reparative cytokines, while defective efferocytosis may lead to hyperinflammation and diseases . The present study summarized the current knowledge of efferocytosis and the links between efferocytosis and body homeostasis. Further, it reviewed the consequences of impaired efferocytosis in multisystem diseases . Several drugs and treatments available to enhance efferocytosis are also mentioned to provide new evidence for clinical application. Cardiovascular diseases Studies on genome-wide association have discovered that common genetic variants in the chromosome 9p21 confer the risk of coronary artery disease, myocardial infarction (MI) and ischemic stroke . The expression of calreticulin protein is reduced in the plaques of these allele carriers , while the area of the necrotic core and the number of ACs increase in the plaques of atherosclerosis . Calreticulin, located in the endoplasmic reticulum, serves a crucial role in cardiac embryogenesis. It affects cardiac development and myofibrillogenesis and is involved in the pathophysiology of several cardiac pathologies . Calreticulin binds to the ‘eat me’ ligand on the surface of ACs, activating LDLR4 on the surface of phagocytic cells and inducing phagocytosis . Therefore, the reduction of calreticulin protein of these allele carriers suppresses the ‘eat me’ signal and weakens the phagocytosis of ACs. This explains why efferocytosis is independent of traditional risk factors (such as hypertension, dyslipidemia, diabetes and smoking) of cardiovascular diseases . Atherosclerosis, a major pathological basis for cardiovascular and cerebrovascular diseases, is also the key process in other diseases, such as chronic cerebral insufficiency and cerebral infarction. Atherosclerosis is considered to be a cholesterol storage disease and a lipid-driven inflammatory disease . Cholesterol loading is hypothesized to cause pro-inflammatory cytokine secretion and form intracellular cholesterol microcrystals that activate the inflammasome . In addition, cholesterol-laden macrophages are ‘foam cells’ that die easily and release their contents in advanced lesions and thereby can worsen the inflammatory status . As atherosclerosis is an inflammatory disease, various factors involved in the inflammatory response may be related to the formation of atherosclerotic plaques . TNF-α is elevated in the pro-inflammatory early-stage of atherosclerosis. TNF-α inhibits MFGE8, Mertk and LRP1 by activating the Toll-like receptor (TLR) and upregulates CD47 expression to activate the ‘not eat me’ signal . TNF-α weakens the efferocytosis and prevents timely clearance of ACs, thereby aggravating the inflammatory response and further worsening atherosclerosis . The above reactions form a vicious circle. Therefore, the effect of efferocytosis is gradually impaired as the atherosclerotic plaque progresses. 
In atherosclerosis, the clearance of ACs is essential to resolve inflammation. Efferocytosis promotes the resolution of inflammation in a stepwise manner. One step is to recognize and engulf ACs, which prevents AC accumulation and inflammatory agent secretion . The engulfment of ACs results in the acquisition of excess cellular materials such as lipids, carbohydrates, proteins and nucleic acids . Macrophages need to activate degradation and efflux pathways for increased metabolic load, which is crucial for inflammatory resolution and tissue repair . For instance, lipid metabolism activates the nuclear receptors peroxisome proliferator-activated receptor (PPAR) and liver X receptor (LXR)-α, helping release anti-inflammatory cytokines, such as IL-10 and TGF-β . Efferocytosis within the plaque is impaired when atherosclerotic plaque develops in the late stage . A study has confirmed that the ratio of apoptotic cell clearance is nineteen times higher in human tonsils as compared with human atherosclerotic plaques . Schrijvers et al found more apoptotic cells outside lesional phagocytes in advanced human coronary artery lesions. Defective efferocytosis leads to post-apoptotic cellular necrosis and the release of proinflammatory factors . Failed AC clearance, increased inflammation and worsened atherosclerosis were found in mice lacking TIM-4, Mertk, MFGE8, or Pro S. As macrophage apoptosis accelerates under defective efferocytosis, the lipid-laden necrotic core enlarges with the progression of atherosclerotic plaques . Thinning fibrous cap, high-level inflammatory cytokines, apoptosis of intimal cells and expansion of the lipid-laden necrotic core all contribute to vulnerable plaques and acute coronary artery syndrome . The absence of efferocytosis signals also inhibits the subsequent intracellular cholesterol reverse transportation pathways , then promotes foam cell formation and initiates the development of atherosclerosis. C1q protects early atherosclerosis by promoting macrophage survival and improving the function of macrophage foam cells . Effective efferocytosis can inhibit secondary cell necrosis and prevent dead cells from releasing inflammatory factors and toxic molecules, thereby slowing down atherosclerosis progression and reducing plaque vulnerability . Enhanced efferocytosis can reverse hypoxia in murine atherosclerosis to prevent necrotic core expansion . Natalicone ZB, the specific agonist of LXR, can facilitate efferocytosis, inhibit plaque formation and reduce the area of necrotic core . Conventional anti-atherosclerotic drugs, such as statins and non-steroidal PPAR γ agonists, can enhance efferocytosis in plaques . In atherosclerosis treatment, statin can reduce cholesterol and inflammation, repress the highly expressed Ras homologous gene family member A, a negative regulator of phagocytosis in atherosclerotic lesions . Experimental results have confirmed the regulatory role of extracellular signal-regulated kinase 5 (ERK5) in macrophage phagocytosis. ERK5, one of the mitogen-activated protein kinases, can maintain macrophage phagocytosis and prevent atherosclerosis progression . In LDLR −/− mice, ERK5 gene knockout can aggravate atherosclerosis and inhibit the expression of efferocytosis-related proteins . ERK5 inhibitor can downregulate the phagocytosis of RAW264.7 cells in vitro . Thus, it can be concluded that regulating efferocytosis of macrophages through ERK5 can exert an anti-atherosclerosis effect. Efferocytosis also serves a role in other cardiovascular diseases. 
In Wan et al , Mertk could clear apoptotic cardiomyocytes following MI, thus mitigating the progression to heart failure, while suppressed efferocytosis could increase infarct size, promote adverse ventricular remodeling and left ventricle functional deterioration after MI and ease the occurrence of cardiomyopathy. These studies implicate that impaired efferocytosis can result in secondary necrosis, inflammation, cholesterol reverse disorder and thus lead to cardiovascular diseases, such as atherosclerosis and acute coronary artery syndrome. As a crucial modulator in cardiovascular diseases, efferocytosis is worthy of further investigation. Respiratory diseases Lung diseases are closely related to efferocytosis due to the complex inflammatory and immune responses. When lung inflammation occurs, neutrophils are quickly recruited to the airways. Following phagocytosis of pathogens, neutrophils undergo apoptosis. This process is regulated by multiple genes and multiple factors, such as LPS, TNF-α, Fas/Fas-L pathway, apoptotic genes, interleukins, interferons and Caspase protein . Phagocytosis clears apoptotic neutrophils to prevent the release of toxic substances and the subsequent damage to the surrounding tissues, thereby alleviating inflammation . Effective efferocytosis protects normal airways, alveolar structures and even the lung tissues . For example, Lee et al found Mertk overexpression could attenuate bleomycin-induced lung injury in mice. By contrast, due to impaired efferocytosis, the number of uncleared ACs increases, which prolongs inflammatory response in the mouse and human models of chronic obstructive pulmonary disease (COPD) and cystic fibrosis (CF) . COPD is characterized by chronic inflammation, extracellular matrix destruction and increased apoptosis of airway epithelial cells and neutrophils . The macrophage-mediated efferocytosis in the lungs of COPD patients weakens significantly, while efferocytosis strengthens in COPD patients who use statins . Similarly, patients with CF or allergic asthma display protracted inflammation caused by defective efferocytosis . CF is a heritable disease with infection, airway inflammation and bronchiectasis . Sputa examination has shown more ACs in CF patients compared with those with chronic bronchitis . Asthma is a complex syndrome with airflow obstruction, bronchial hyper-responsiveness and airway inflammation . The resolution of ovalbumin-induced allergic airway inflammation is delayed in Mer-deficient mice . Targeting T-cell immunoglobulin and mucin domain-containing molecule (TIM) 1, a member of TIM receptor family, can modulate airway inflammation in mouse models of airway hyper-responsiveness . Grabiec et al indicate that the deficiency of Axl receptor tyrosine kinase accelerates asthma progression by exaggerating airway inflammation. The prognosis of acute lung injury (ALI) in mouse models is also influenced by defective efferocytosis. MFGE8 deficient mice with lipopolysaccharide (LPS)-induced acute ALI showed increased inflammatory cytokines and decreased survival . Mertk can attenuate LPS-induced lung injury . Common respiratory drugs, such as statins, macrolides and corticosteroids, can alleviate respiratory symptoms by enhancing efferocytosis. Macrolide antibiotics are reported to promote efferocytosis by upregulating the expression of bridging molecules such as collectins . These drugs have already been used to treat COPD, cystic fibrosis, or asthma . 
Mannose receptor may be a target of azithromycin to improve phagocytic ability . Azithromycin restores the phagocytic function of the airway macrophages by binding to PtdSer in COPD . Corticosteroids enhance efferocytosis by downregulating CD47-signal regulatory protein (SIRP) and upregulating Mertk . Glucocorticoids, the most commonly used drugs for asthma and COPD, enhance macrophage phagocytosis in vitro and restore the efferocytosis of macrophages in the airway of patients with asthma by regulating Mertk and Pro S . Grégoire et al also found that blocking high mobility group box-1 (HMGB1) and activating AMP-activated protein kinase (AMPK) by metformin could enhance AC clearance and decrease lung inflammation in patients with acute respiratory distress syndrome (ARDS). Several chronic lung diseases are characterized by an increased lung burden of uningested apoptotic cells and sustained lung inflammation . The efferocytic process favors tissue repair and inflammation suppression . Existing therapies such as corticosteroids, statins and macrolides may function in part by augmenting apoptotic cell clearance. Liver and intestine diseases Kupffer cells and other myeloid phagocytic cells, the most important hepatic efferocytes, are attracted into the liver to remove ACs after injury . Bukong et al found that acute alcohol use could significantly impair the clearance of neutrophil extracellular traps by macrophages, which could contribute to prolonged liver inflammation and injury. Mediators released by neutrophils during NETosis can directly corrupt the recognition of apoptotic cells by phagocytes: HMGB1, for example, initiates pro-inflammatory signal whilst simultaneously preventing efferocytosis by obscuring PS recognition . In patients with alcoholic liver disease, alcohol and alcohol metabolites increase liver inflammation and steatosis . Wang et al found alcohol inhibits MFGE8 gene expression and impairs efferocytosis and thus leading to hepatocyte necrosis, which explains why alcohol can cause liver damage from another perspective. Defective efferocytosis also contributes to other liver diseases, such as fatty liver disease and primary biliary cholangitis . Following the phagocytosis of ACs, phagocytes increase cholesterol efflux activity to maintain lipid homeostasis. The engagement of PS receptors activates PPAR γ/δ and LXR, the regulators of cellular lipid homeostasis and upregulates the phagocytic receptors, such as the TAM family, to accommodate to the increased cholesterol induced by phagocytosis . Excessive accumulation of fatty acids caused by defective efferocytosis triggers oxidative stress and lipid peroxidation, leading to liver cell death/apoptosis, inflammation, liver steatosis and even lipotoxic liver cell damage. GAS6 and Mertk can protect cultured primary mouse hepatocytes against lipid toxicity via protein kinase B (AKT)/signal transducers and activators of transcription 3 (STAT3) signaling . The enhanced oxidative stress response and the reactive oxygen species (ROS) expression in fatty liver tissues exacerbate non-alcoholic fatty liver disease (NAFLD) . Mertk can protect primary macrophages from oxidative stress-induced apoptosis . The significantly upregulated NLR family, pyrin domain containing 3 (NLRP3) inflammasome aggravates NAFLD greatly . However, a study showed that TIM4 reduced the inflammation in NAFLD by suppressing NLRP3 inflammasome . High-level hepatocyte apoptosis is found in non-alcoholic steatohepatitis (NASH) patients . 
The delayed removal of apoptotic liver cells can cause liver damage, inflammation and fibrosis. Liver fibrosis, the pathological result of various chronic liver diseases, is associated with the dysregulation and polarization of M1/M2 macrophages. Efferocytosis can alleviate liver fibrosis by stimulating M2 macrophage polarization. Rantakari et al clearly showed that the absence of stabilin-1 aggravates fibrosis in chronic liver injury following CCl4 administration. TIM4 and GAS6 are critical proteins in the resolution of hepatic ischemia-reperfusion injury. The administration of recombinant GAS6 can protect GAS6-knockout mice from fulminant hepatic failure. GAS6 also protects primary mouse hepatocytes from hypoxia-induced cell death through AKT phosphorylation and diminishes inflammatory cytokines in vitro. In acute and chronic liver injury, elevated Galectin-3 expression can facilitate phagocytosis via Mertk. Triantafyllou et al demonstrate that Mertk + macrophages, as a novel hepatoprotective target, can promote resolution responses and quell tissue-destructive responses following acute liver injury. Phagocytic clearance of ACs also serves a role in intestinal inflammatory disorders. In the acute phase of murine experimental colitis, MFGE8 expression decreases in inflamed colons. However, recombinant MFGE8 ameliorates colitis by reducing inflammation and improving disease parameters, suggesting that it may be a useful therapeutic agent for colitis. A number of studies indicate that the receptor tyrosine kinases Axl and Mertk can promote the resolution of inflammation, serving as potential therapeutic targets for inflammatory bowel disease (IBD). Compared with wild-type mice, Axl −/− Mertk −/− mice and GAS6 −/− mice are more sensitive to dextran sulfate sodium, presenting with more severe colitis signs and greater weight loss. Effective efferocytosis prevents apoptotic or necrotic cells from forming cell debris that can induce liver and intestine damage. Efferocytosis serves a role in liver diseases by regulating lipid metabolism, inflammation and macrophage polarization. Also, since efferocytosis promotes the resolution of inflammation, it may be exploited therapeutically in intestinal inflammatory disorders. Autoimmune diseases A large number of cells undergo apoptosis during the development and homeostasis of the body. Two main pathways of apoptosis have been identified: the extrinsic (death receptor) pathway and the intrinsic (mitochondrial) pathway. Patients with autoimmune diseases show high levels of apoptotic cells, partly attributed to massive apoptosis in phagocytes or in tissue cells, such as glomerular cells, epidermal keratinocytes and T cells. Efficient AC clearance maintains immune homeostasis by eliminating auto-antigens, as well as producing anti-inflammatory and immunosuppressive signals. By contrast, under defective efferocytosis, ACs cannot be removed in time. As a result, uncleared ACs may rupture and release harmful contents such as autoantigens, thus promoting immune responses and resulting in autoimmune diseases, such as systemic lupus erythematosus (SLE), rheumatoid arthritis (RA), type 1 diabetes (T1D) and multiple sclerosis (MS). Genetic evidence from mouse studies has demonstrated that failed or delayed efferocytosis might cause immune system disorders and the release of auto-antigens. TAM triple-knockout mice can develop autoimmune hepatitis at the age of six months, with elevated transaminases and rising titers of autoantibodies against nuclear antigens.
SLE damages multiple organs, such as the skin, joints, kidneys, lungs, nervous system, heart and blood vessels. Autoantibodies against nuclear antigens, such as antinuclear antibodies and anti-double-stranded DNA (dsDNA) antibodies, are found in SLE patients. Macrophages from SLE patients possess a weaker capability to engulf ACs. Defective clearance of ACs has been demonstrated to promote autoantibody production in vivo. An analysis of 50 SLE patients indicates that the GAS6/ProS-TAM system is associated with the disease activity of SLE in several ways. Several PS receptors and PS opsonins serve an essential role in efferocytosis, chronic inflammation and age-dependent autoimmunity. TAM triple (Mertk −/− , Axl −/− , Tyro3 −/− ) knockout mice develop a poly-autoimmune syndrome resembling SLE, with elevated titers of autoantibodies, uncontrolled B and T cell proliferation and lymphocyte accumulation. Similarly, TIM4-deficient mice, with hyperactivated T and B cells, develop autoantibodies to dsDNA, which are specific to SLE. Furthermore, knockouts of TIM-1, scavenger receptor class F member 1 (SCARF1) and CD300f all share a common phenotype with SLE-like autoimmunity. Together, these observations support defective AC clearance as an etiological cause of SLE. The site of RA inflammation contains uncleared ACs, suggesting that RA is related to impaired efferocytosis. Waterborg et al show that the deficiency of Axl and Mertk worsens arthritis in mice, whereas overexpressing their ligands ProS1 and GAS6 to activate these receptors can ameliorate arthritis. Other studies have demonstrated that activating LXR/PPAR γ exerts therapeutic effects in mouse models of inflammatory arthritis. T1D is a T cell-mediated autoimmune disorder with insulin deficiency and hyperglycemia. Inefficient clearance of apoptotic pancreatic cells may aggravate inflammation and necrosis, thus accelerating the release of autoantigens. Defective wound healing is a characteristic of patients with T1D. Due to incomplete phagocytosis, dead cells accumulate at the wound site, which leads to inflammation and retards wound healing. Das et al demonstrate that MFGE8 −/− mice develop systemic inflammation and that MFGE8 exerts a potential therapeutic effect on diabetic wounds. Sjogren's syndrome (SS) is a chronic, progressive autoimmune disease, with dry mouth and eyes as the most frequent symptoms. The accumulation of ACs and a type I interferon signature have been observed in patients with SS and in mouse models. The function of TAM receptors in efferocytosis and in dampening the interferon response is impaired in SS. Chen et al found that decreased plasma GAS6 concentration is associated with SS and thus GAS6 may be a novel independent risk factor for SS. Similarly, another study shows that the levels of Tyro3 and Axl were decreased in SS patients. These findings suggest that efferocytosis may be associated with disease activity or inflammation in SS. Glucocorticoids, the most widely used anti-inflammatory drugs, are used to treat SLE by promoting AC clearance and alleviating inflammation in an MFGE8-dependent manner. Glucocorticoids can also upregulate Mer and increase the expression of annexin A1 and lipoxin A receptors. Long-term effects of glucocorticoids are reported to be dependent on PPARγ. When efferocytosis fails, ACs can rupture and release cellular materials, which then stimulate inflammatory and immunogenic reactions and are likely to trigger an autoimmune response.
Glucocorticoids treat autoimmune diseases in part by promoting efferocytosis, which suggests new directions for future treatment. Neurodegenerative disorders Phagocytosis in the brain is accomplished by microglia, the resident macrophages of the brain and spinal cord. The central nervous system also requires efficient efferocytosis to achieve homeostasis by clearing dying cells and preventing the spillover of proinflammatory and neurotoxic molecules. Defective efferocytosis may lead to multiple neurodegenerative disorders, such as Alzheimer's disease (AD), Parkinson's disease (PD) and Huntington's disease. Excessive ACs have been detected in patients with neurodegenerative diseases. MFGE8, an endogenous protective factor, regulates microglial phagocytosis of apoptotic neurons and inhibits inflammation. In the central nervous system of mice, microglial cells lacking Mertk fail to clear ineffective synaptic connections, thus impairing hippocampal development and propagating neuronal damage. AD is characterized by the accumulation of hyperphosphorylated tau protein and amyloid β (Aβ). Zheng et al found that Aβ generation is significantly decreased by Tyro3 receptor overexpression in a cell model. A significant increase in amyloid plaques in the hippocampus and in plaque-associated clusters of astroglia has been detected in a Tyro3 −/− AD transgenic mouse model. Neuroinflammation serves a key role in AD development and progression. The expression of MFGE8, an anti-inflammatory agent, decreases in a mouse model of AD. Evidence suggests that MFGE8 can suppress A1 astrocytes and regulate microglial M1/M2 polarization to prevent the death of neurons and oligodendrocytes by modulating NF-κB and PI3K-AKT signaling. Recombinant MFGE8 may have the potential to treat chronic inflammation in AD by inhibiting the MAPK and NF-κB signaling pathways. PD is a progressive neurological disorder characterized by α-synuclein deposition. Dysregulated microglial phagocytosis has been recognized in PD, and defective phagocytosis has also been observed in the monocytes of patients with PD. In PD models, CX3CR1 −/− mice show microglial neurotoxicity. Studies also reveal that microglia phagocytose and clear the cellular debris of degenerating neurons through a C1q-mediated pathway and scavenger receptors. In the central nervous system, microglial phagocytic function is supported by bridging proteins (such as MFGE8 or Pro S) and TAM-receptor kinases (such as Axl and Mer) to clear PS-exposing neurons. The study of Nakashima et al suggests that MFGE8 may prevent PD by reversing the loss of mesencephalic dopamine neurons. Chronic neuroinflammation is also crucial in PD. Ghahremani et al conclude that efferocytosis can shift the macrophage phenotype toward an anti-inflammatory phenotype. In conclusion, neuronal apoptotic debris is cleared by phagocytic cells through efferocytosis, which inhibits the release of proinflammatory and antigenic autoimmune constituents and thereby enhances neuronal survival and axonal regeneration. The tremendous therapeutic potential of efferocytosis for neurodegenerative diseases requires further preclinical development. Tumors Efferocytosis also serves an essential role in tumors. Apoptotic cell clearance can have deleterious consequences within the tumor microenvironment, potentially affecting the natural progression of the disease and cancer treatments. Efferocytosis can help generate a tumor-tolerant, immunosuppressive tumor microenvironment.
In the tumor microenvironment, tumor-associated macrophages, which are largely polarized to the M2-like phenotype through PPAR-γ and LXR-α, exert pro-tumor effects by promoting angiogenesis, suppressing T cell infiltration and cytotoxic T cell function, remodeling the extracellular matrix to promote invasion and metastasis of cancer cells, and suppressing the immune system. Efferocytosis upregulates TAM receptor expression to promote macrophage polarization towards an immunosuppressive phenotype. Namely, the escape of malignant cells is supported by TAM-mediated efferocytosis, negative regulation of dendritic cell activity and dysregulated production of chemokines. MerTK overexpression has been found in a number of human cancers, including myeloid and lymphoblastic leukemia, melanoma and gliomas. Thus, TAM receptors on macrophages serve as exciting targets for cancer therapy by affecting macrophage phenotype and efferocytosis. A growing body of evidence indicates that efferocytosis in the tumor microenvironment accelerates tumor progression, which points to new approaches for tumor treatment. For example, blockade of PtdSer recognition by phagocytes during efferocytosis can sufficiently inhibit tumor progression and metastasis. Studies demonstrate that Axl and Mer contribute to cell survival, migration, invasion, metastasis and chemosensitivity, and can therefore serve as therapeutic targets. Cancer cells have been found to escape from phagocytosis by upregulating 'don't eat me' molecules on their surface. Willingham et al found that CD47 was highly expressed in ovarian, breast, colon, bladder, glioblastoma, hepatocellular carcinoma and prostate tumor cells; a high level of CD47 mRNA expression was associated with decreased survival. Anti-CD47 antibodies can promote phagocytosis, inhibit tumor growth and prevent tumor metastasis. Anti-CD47 antibodies enhance cancer cell phagocytosis by inhibiting the CD47-SIRPα axis in anti-cancer therapy. These results suggest CD47 as a therapeutic target for solid tumors. Combining anti-CD47 antibodies with tumor-targeting therapies can achieve higher anti-cancer efficacy. Other studies support the idea that enhanced efferocytosis can exert anti-tumor effects. A previous study has revealed that multiple myeloma is associated with reduced efferocytosis by monocytes. Some studies have demonstrated that the loss of Axl, Mertk and GAS6 can promote colon cancer. Axl, Mertk and GAS6 can limit inflammation and reduce the risk of inflammation-associated colorectal cancer. The above evidence suggests that efferocytosis plays a double-edged-sword role in tumors. Therefore, the specific mechanisms of efferocytosis in tumors require further clarification. Discussion Efferocytosis maintains homeostasis throughout biological evolution. Efferocytosis not only serves a role in the above-mentioned diseases, but also affects other conditions, such as skin diseases, retinal degeneration and wound healing. Loss of the phagocytosis receptor Mertk causes overt retinal degeneration. The protein CCN1 is a critical opsonin in skin injury, acting as a bridging molecule in neutrophil efferocytosis. Abnormal activation of complement-mediated phagocytosis also affects retinal diseases, such as glaucoma and age-related macular degeneration. C1q is found to stimulate endothelial cell proliferation and migration and to promote tube formation and sprouting of new vessels. C1q may represent a valuable therapeutic agent for wound healing.
The critical role of efferocytosis in multisystem diseases provides new directions for the prevention and treatment of these diseases. More extensive and in-depth studies are needed to clarify the pathophysiological mechanisms of efferocytosis in disease. Traditional Chinese medicine, which acts through multiple components, targets, pathways and effects, shows clear advantages in the treatment of diseases. Therefore, research on the role of traditional Chinese medicine in regulating efferocytosis promises a new direction for therapy development. An increasing number of studies are investigating natural products for efferocytosis regulation. Our previous studies demonstrate the key role of efferocytosis in the development of atherosclerosis and the regulatory effect of efferocytosis on this disease. Guan Xin Kang (GXK), a formula designed by our research group and composed of Astragalus, Salvia, Leonurus, Trichosanthes kirilowii, Pinellia ternata and scallion white, can improve the pathological changes in the thoracic aorta, increase the phagocytosis rate of splenic macrophages and upregulate the protein expression of thrombospondin 1 and TAM receptors (Tyro3/Axl/Mertk) in LDLR −/− mice. The expression of Mertk protein in RAW264.7 cells can be upregulated by GXK-medicated serum. The above results indicate that efferocytosis regulation may be effective in treating atherosclerosis. Treatment with catechins exerts anti-atherogenic effects in rats. Kaempferol, luteolin, ellagic acid and berberine can upregulate SR-BI expression and further inhibit ox-LDL uptake in macrophages. Caffeic acid and ferulic acid possess anti-atherogenic properties by enhancing HDL-mediated cholesterol efflux from macrophages. These natural products, which have been proven to inhibit foam cell formation via efferocytosis, are potential drugs for improving cardiovascular diseases. There are also several natural products, such as sparstolonin B, berberine and celastrol, that can regulate macrophage activation, recruitment and polarization to reduce inflammation, attenuate lipid accumulation and improve insulin sensitivity in NASH treatment. Baicalin promotes macrophage polarization to the M2-type in mice with IBD by enhancing the phagocytosis and efferocytosis of macrophages. Polysaccharides from Ganoderma lucidum modulate microglial phagocytosis to attenuate neuroinflammation. An increasing number of natural products, such as pycnogenol, polysaccharides from the roots of Sanguisorba officinalis and tea polysaccharides, enhance the phagocytic function of macrophages and could be used to treat diseases. With fewer adverse effects and multi-target properties, natural products are prospective medicinal components for the future treatment of multi-system diseases. Therefore, more research is needed to explore the mechanisms by which Chinese medicine regulates efferocytosis and to provide a reliable basis for disease treatment.
Changes in Microbial Composition During the Succession of Biological Soil Crusts in Alpine Hulun Buir Sandy Land, China | d98957e1-b5ae-48be-8eee-bc5c97e7516b | 10873229 | Microbiology[mh] | Biological soil crusts (biocrusts) are complex communities formed by microorganisms (bacteria, fungi, archaea), cryptoflora (algae, lichens, moss), and other microscopic organisms bonded to soil surface particles via various secretions, such as mycelium, rhizoides, and polysaccharides . As one of the oldest known life forms, biocrusts appeared in the fossil record as early as 2.6 billion years ago . That initial formation of biocrusts long ago is linked to how terrestrial ecosystems originated, in that the widespread development of biocrusts and their improvement of local climate and soil conditions enabled vascular plants to emerge and strongly compete, thereby forming distinct vegetation communities . In this respect, arid and semi-arid ecosystems are particularly noteworthy, since they collectively cover 30–40% of the world’s terrestrial area , but in these relatively dry regions, their limited water carrying capacity restricts the viability of large multidimensional vascular plants. Yet, biocrusts are still widely distributed in these ecosystems, constituting at least 70% of their biological cover in some areas, where they effectively enhance soil stability and perform key ecological functions (e.g., providing windbreaks, regulating hydrology, maintaining moisture), as well increasing the fertility and microbial activity of soil . Accordingly, biocrusts have earned the moniker “desert ecosystem engineers”, being robust indicators for evaluating the health of desert ecosystems . The morphology and structure of biocrusts is highly diverse, being composed of algae, lichens, and mosses, whose functional types and taxa are mixed together in varying degrees. Although biocrusts count among the planet’s major terrestrial communities, their scientific study started late and early progress was limited. Cyanobacteria, algae, archaea, bacteria, and microfungi are the basic substrates of biocrust formation , which together promote colonization by lichens, bryophytes, and microorganisms . At both global and regional scales, the composition and biomass of particular biocrust communities strongly depends on climatic conditions . For example, for regions whose evapotranspiration potential is relatively high, their biocrusts are mostly composed of low-biomass cyanobacteria, bacteria, and microfungi; i.e., cyanobacterial crusts, lacking mosses or lichens . With declining evapotranspiration, cyanobacteria increase in biomass, lichens and mosses appear, leading to the differentiation and diversification of biocrust types . Beside climate, the soil microhabitat and its characteristics—soil type, texture, nutrient content, salinity, pH, and moisture—can be critical factors shaping the composition and distribution of biocrusts on a regional scale . According to the dominant cryptogam present in them during their succession, biocrusts can be broadly classified into three stages: cyanobacterial crust, lichen crust, and moss crust . As their main biological components, soil microbes (bacteria, fungi, archaea) are collectively responsible for essential ecological functions. Not surprisingly then, trends in the number of dominant species, α-diversity and richness, and community composition of microorganisms across biocrust stages are strongly correlated with biocrust succession . 
Only recently, however, have we begun to explore how the biomass, species composition or ecological roles of these microbial organisms change under differing environmental conditions. Technical limitations precluded robust estimates of microbial diversity in previous studies of biocrusts. Fortunately, driven by technological advances in molecular biology within the last decade, microbiome techniques can now be readily applied to reveal the composition of microbial communities at different stages of biocrust succession. Given the crucial ecological functions of biocrusts in arid and semi-arid ecosystems, in recent years the microbial community composition of biocrusts at different successional stages has been extensively studied in various deserts in distinct bioclimatic zones. This includes cold deserts (e.g., the Colorado Plateau in the USA), typically found in temperate regions at high elevations, on plateaus, or in mountainous areas, though they also occur in polar regions (Antarctica and the Arctic)—that is, generally where the regional mean annual temperature is close to 0 °C; temperate deserts (e.g., the Gurbantunggut Desert and Tengger Desert in China, and the Kyzyl Kum Desert in Uzbekistan), usually located at mid-high latitudes, where the regional mean annual temperature is under 18 °C; and hot deserts (e.g., the Atacama Desert in Chile, the Namib Desert in Namibia, and the Negev Desert in Israel), usually lying at mid-low latitudes, with a mean annual temperature above 18 °C, hot summers, daytime temperatures regularly exceeding 30 °C, mild winters, and rainfall concentrated in summer (these three desert categories—cold, temperate, and hot—follow the usage at www.britannica.com ). Yet, despite recent investigations in several regions, significant knowledge gaps remain concerning the composition of biocrust communities, particularly in underrepresented areas. One of China's largest sandy land areas is the Hulun Buir Sandy Land, which lies at the highest latitude among them. According to the above desert classification ( www.britannica.com ), the Hulun Buir Sandy Land is arguably a cold desert. Until now, however, no attempt has been made to investigate the soil microbial community composition of its biocrusts and the drivers of their succession. Hence, the overarching goal of this study was to apply next-generation sequencing (NGS) to reveal how the soil microbial community changes during biocrust succession in the Hulun Buir Sandy Land region. Furthermore, considering its high latitude and cold climate, we asked: could the microbial community composition and the pattern of biocrust succession in this region differ from those in other desert ecosystems? Therefore, our objectives were three-fold: (1) to uncover prominent trends in soil physical and chemical properties during biocrust development and succession; (2) to profile the α- and β-diversity of the soil microbial community vis-à-vis biocrust development and succession; and (3) to elucidate relationships between these complex microbial communities and aspects of their environment. The ecological findings will not only bolster our understanding of biocrusts and their community structure, but are also critical for expanding our knowledge of their diversity and functioning across terrestrial ecosystems.
Study Site and Soil Sampling The field research was carried out in the New Barag Left Banner of Hulun Buir Sandy Land (Fig. ), which consists of three large sand belts in northeastern China. Our study site was in the eastern end of the biggest sand belt, which lies along the southern bank of the Hailar River in the northern part of Hulun Buir Sandy Land (118°4′9.7356″ E, 49°19′9.9732″ N, Fig. ). This region has a temperate continental climate, with a mean annual precipitation ( MAP ) of 290 to 400 mm and a mean annual temperature ( MAT ) of − 5 to 1.5 °C. The zonal vegetation type is typical temperate grassland dominated by annual herbaceous plants, such as Leymus chinensis , Stipa grandis , Agropyron cristatum , and Carex duriuscula . In this study, we used a "space-for-time substitution" method to uncover changes in microbial composition during biocrust succession in the Hulun Buir Sandy Land in China. At the research site, a total of 12 plots (each 20 m × 20 m) were established, corresponding to four soil cover types: bare sand (no visible crusts, as the control; stage 1 ), cyanobacterial crust ( stage 2 ), lichen crust ( stage 3 ), and moss crust ( stage 4 ) (Fig. ). Plots having the same biocrust successional stage were separated by more than 1 km. Within each plot, we established four representative subplots (each 5 m × 5 m, at least 10 m apart), in which five soil samples were randomly taken at a 0–5 cm depth under the biological layer, using a sterile cutting ring (9.0-cm diam.). These subplot-level samples were mixed to form one composite sample per plot for each soil cover type, for a total of 12 composite plot-level replicate samples (three × four soil cover types). These were sieved (through 2.0 mm mesh) to remove any visible roots and stones, and then each sample was divided into three portions for further analyses. The first was simply air-dried; the second was stored at 4 °C, to analyze various soil physiochemical properties; immediately after its collection, the third portion was transported on ice to the laboratory where it was freeze-dried at –80 °C for subsequent DNA extractions. Soil Physiochemical Properties For each sample per soil cover type, measurements of its soil material (second portion) were made. Soil particle size composition was determined using a Laser Diffraction Particle Size Analyzer (Mastersizer 2000, Malvern, England). Soil pH of a suspension of soil:water in a 1:5 ratio was recorded with a calibrated pH meter (Mettler Toledo, Giessen, Germany). To measure the total content of water-soluble salt ( WST ) and soil organic matter ( SOM ), we respectively used the residue drying-quality and K 2 Cr 2 O 7 methods. Total nitrogen ( TN ) was measured using a Kjeldahl analysis system (Kjeltec 8400, Foss, Hillerød, Denmark). Total phosphorus ( TP ) was determined via colorimetry using sulfuric acid-perchloric acid digestion. Flame photometry was used to quantify total potassium ( TK ), using a perchloric acid-hydrofluoric acid digestion. The soil available nitrogen ( AN ), available phosphorus ( AP ), and available potassium ( AK ) were determined by the alkaline hydrolysis diffusion method, molybdenum antimony colorimetry, and the flame photometry method, respectively. Soil DNA Extraction and High-throughput Sequencing To extract soil DNA from the biocrust samples, the E.Z.N.A. DNA Kit (Omega Bio-Tek, Norcross, GA, USA) was used, following the manufacturer's protocols.
The obtained 12 DNA extracts (three plot-level replicates for each biocrust type (i.e., stages 2 , 3 , 4 ) and the control ( stage 1 )) were PCR-amplified and then underwent sequencing analyses. The bacterial 16S ribosomal RNA gene and fungal ITS rRNA genes were respectively amplified using the primer pairs 338F_806R and ITS1F_ITS2R, under these thermocycling parameters: 95 °C for 3 min, followed by 25 cycles at 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 45 s, with a final extension at 72 °C for 10 min. Both PCRs were performed in triplicate, each using a 20-μL reaction mixture containing 2 μL of 5 × FastPfu buffer, 2 μL of 2.5 mM dNTPs, 0.8 μL of each F/R primer (5 μM), 0.2 μL of FastPfu polymerase, and 10 ng of template DNA. Amplicons were first extracted from 2% agarose gels and purified, using an AxyPrepDNA gel extraction kit (Axygen Biosciences, Union City, CA, USA) according to the manufacturer's instructions, and then quantified with a QuantiFluor-ST fluorometer (Promega, Madison, WI, USA). Purified amplicons were pooled in equimolar quantities and paired-end sequenced (2 × 300 bp) on the Illumina MiSeq platform PE300. All obtained raw reads have been deposited into the database of the NCBI Sequence Read Archive (SRA) (accession number: PRJNA1026485). Bioinformatics Analysis The paired-end reads from the original DNA fragments were merged using FLASH software, a tool designed to combine them when reads 1 and 2 overlap. The resulting paired-end reads were then assigned to each sample according to their unique barcodes. Next, the raw sequencing data were quality-filtered according to these criteria: (i) the reads were truncated at any site that received an average quality score < 20 over a 50-bp sliding window, with truncated reads < 50 bp removed; and (ii) reads were assigned to samples only if their barcodes matched exactly and no more than two nucleotide mismatches occurred during primer matching, and reads containing ambiguous characters were discarded. Only sequences with overlaps > 10 bp were assembled, according to their overlap sequence; reads that could not be assembled were discarded. Operational taxonomic units (OTUs) were clustered with a 97% similarity cutoff using UPARSE, and chimeric sequences were identified and removed using UCHIME. Singleton OTUs were removed from the dataset. The taxonomic status of the 16S and ITS rRNA sequences was determined using the RDP Classifier ( http://rdp.cme.msu.edu/ ) against the SILVA (v132) ( https://www.arb-silva.de/ ) or Unite (v8.0) ( https://unite.ut.ee/ ) database, respectively, at a 0.7-confidence threshold. Importantly, to account for differences in their sequencing depth, all samples were normalized in QIIME software (v1.8.0). The ensuing OTUs were used to calculate the α-diversity and β-diversity metrics. Statistical Analysis The α-diversity ( http://www.mothur.org/wiki/Calculators ) indexes were calculated using the diversity function of the "vegan" package ( https://CRAN.R-project.org/package=vegan ) for the R computing platform (v3.2.1) ( www.r-project.org ). To examine differences in the relative abundance of dominant groups of the bacterial (genus level) or fungal (family level) community among stages 1–4 , the Kruskal–Wallis test followed by Tukey's HSD (honest significant difference) post hoc test was used, both implemented in R with the "agricolae" package ( https://CRAN.R-project.org/package=agricolae ).
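To make this workflow concrete, the following minimal R sketch (using hypothetical file and object names that are not part of the original analysis scripts) illustrates how α-diversity indices can be computed from a normalized OTU table with the "vegan" package and then compared among the four successional stages; for brevity, the post hoc grouping here relies on the multiple-comparison procedure built into agricolae's kruskal() rather than a literal Tukey's HSD.

# Minimal sketch: alpha-diversity and stage-wise comparisons (hypothetical inputs)
library(vegan)      # diversity indices and richness estimators
library(agricolae)  # Kruskal-Wallis test with post hoc grouping

otu   <- as.matrix(read.csv("otu_table_normalized.csv", row.names = 1))  # samples x OTUs (counts)
meta  <- read.csv("sample_metadata.csv", row.names = 1)                  # per-sample metadata
stage <- factor(meta$stage, levels = c("stage1", "stage2", "stage3", "stage4"))

shannon <- diversity(otu, index = "shannon")   # Shannon index per sample
chao1   <- estimateR(otu)["S.chao1", ]         # Chao1 richness estimate per sample

# Nonparametric comparison of Chao1 among the four stages, with grouping letters
kw <- kruskal(chao1, stage, group = TRUE, p.adj = "bonferroni")
print(kw$groups)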
To gauge the relevance of soil physiochemical properties (i.e., D 1 , D 2 , D 3 , pH , AN , TN , TK , TP , WST , AK , SOM , and AP ) and assess their ability to explain variation in the distribution patterns of microbial community members in the different biocrust samples, distance-based redundancy analysis (db-RDA) and Monte Carlo permutations were used. Mantel tests were implemented to evaluate how bacterial or fungal community composition was related to the measured site-level soil variables. Pearson's r coefficient was used to test for a positive ( r > 0) or negative ( r < 0) linear correlation between two variables, carried out in R using its cor function. The "vegan" package was used to run the db-RDA and Mantel tests in R, and also to build the matrices of pairwise taxonomic distances for bacterial or fungal communities (Bray–Curtis dissimilarity) vis-à-vis the environmental variables (Euclidean distance).
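As a companion sketch (again with hypothetical object names rather than the authors' actual scripts), the constrained-ordination and matrix-correlation steps described above could be run with the "vegan" package roughly as follows; the permutation counts shown are illustrative defaults.

# Minimal sketch: db-RDA, permutation test, per-variable fit and Mantel test (hypothetical inputs)
library(vegan)

otu <- as.matrix(read.csv("otu_table_normalized.csv", row.names = 1))  # samples x taxa
env <- read.csv("soil_properties.csv", row.names = 1)                  # samples x soil variables

# Distance-based RDA on Bray-Curtis dissimilarities, constrained by the soil variables
dbrda_fit <- capscale(otu ~ ., data = env, distance = "bray")
anova(dbrda_fit, permutations = 999)   # Monte Carlo permutation test of the constraints

# Fit of individual soil variables onto the ordination (r2 and permutation-based p values)
envfit(dbrda_fit, env, permutations = 999)

# Mantel test: Bray-Curtis community dissimilarity vs. Euclidean environmental distance
mantel(vegdist(otu, method = "bray"),
       dist(scale(env), method = "euclidean"),
       method = "pearson", permutations = 999)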
Response of Soil Physicochemical Properties to Biocrust Succession With the succession of biocrusts, significant differences in soil properties emerged in moss crust vis-à-vis the other two crust types and the bare sand (i.e., the control) (Table ). Regarding the three soil particle diameter ( D ) properties—0.002 < D 1 ≤ 0.02 mm; 0.02 < D 2 ≤ 2 mm; D 3 < 0.002 mm—in comparison to bare sand ( stage 1 ), there was a lower proportion of D 1 in all three biocrust types, which declined through their succession trajectory ( stage 2 > stage 3 > stage 4 ). Conversely, the D 3 proportion rose in all three biocrusts, but their D 2 proportion remained similar along the successional gradient. Of the nine soil chemical properties, the contents of seven ( AN , TN , TP , WST , AK , SOM , and AP ) tended to increase in the three biocrust types relative to bare sand, being significantly higher in moss crust ( stage 4 ) than in either cyanobacterial crust ( stage 2 ) or lichen crust ( stage 3 ). Impressively, in moss crust, the AN , TN , TP , WST , AK , SOM , and AP contents were respectively 3.26, 10, 2.71, 3.38, 5.51, 13.31, and 4.33 times greater than in bare sand (Table ). Evidently, the development and succession of biocrusts resulted in the enrichment of the shallow soil layer with carbon (C), nitrogen (N), and phosphorus (P), the most common limiting elements in terrestrial ecosystems. In contrast, both soil pH and TK were negligibly affected by the succession of biocrusts in Hulun Buir Sandy Land. These results can be explained by the powerful ecological enhancement function of biocrusts, which mediate most of the input, transport, and output of matter and energy at the surface boundaries of desert soils. Soil aggregates produced by biocrusts have been shown to stabilize soil particles and soil structure, thereby altering the ecohydrological processes of desert ecosystems in addition to capturing and retaining resources (e.g., soil, organic matter, seeds, and nutrient-rich dust). Further, biocrusts can bolster soil fertility by fixing atmospheric C and N and releasing it into the subsoil layer, thus contributing to global C and N cycling. Moreover, biocrusts have also been identified as a key component of biogeochemical phosphorus cycling during the pedogenesis of sandy substrates. Overall, our findings suggest biocrust development markedly improves the soil properties of bare sand, with a well-developed crust (i.e., moss crust, end of succession: stage 4 ) having a stronger ameliorating effect than less-developed crusts (cyanobacterial or lichen crusts). Indeed, our findings largely agree with those recently reported for European dunes. Structure and Succession of the Biocrust Microbial Community Microbial α-diversity was estimated by the Chao1 index (Fig. ). Whereas its mean value for the bacterial community ranged from 1988.7 ± 235.07 ( stage 1 ) to 2529.8 ± 358.53 ( stage 4 ) (Fig. A), it was much lower for the fungal community, ranging from 722.24 ± 196.56 ( stage 4 ) to 943.01 ± 114.2 ( stage 2 ) across the successional gradient. In general, while the bacterial community's α-diversity continually increased, the fungal community's first increased and then decreased through succession, but these changes were not significant ( P > 0.05). Other calculated indexes for species richness and diversity of the bacterial and fungal communities are summarized in Table .
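For reference, the Chao1 values reported above are nonparametric richness estimates; a commonly used, bias-corrected form of the estimator (the exact variant depends on the software defaults) is

\hat{S}_{\mathrm{Chao1}} = S_{\mathrm{obs}} + \frac{F_{1}(F_{1}-1)}{2(F_{2}+1)},

where S_obs is the number of observed OTUs in a sample and F_1 and F_2 are the numbers of OTUs represented by exactly one and exactly two sequences, respectively.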
That bacterial α-diversity increased with biocrust succession—from bare sandy soil to lichen crusts or moss crusts—is consistent with successional theory of low-to-high level shifts in diversity; hence, in this respect, the dynamics of Hulun Buir Sandy Land are much like other desert ecosystems . Interestingly, fungal α-diversity was similar across the different stages, conflicting with the view that it continually increases across biocrust successional stages . In general, fungal diversity has been found to vary with both the age and type of biocrusts, being higher in their late than early succession . We suggest this discrepancy is most probably due to habitat specificity effects on a regional scale. Again, our results show that mossy crusts could provide more pivotal resources and protection for soil bacterial communities, mainly because of their higher dust capture, water-holding, and nutrient retention capacities . Overall, 32 bacterial and 11 fungal phyla were detected across all 12 plot-level samples based on NGS sequencing. For bacteria, the six dominant phyla (i.e., with a relative abundance > 5%) in all samples of stages 1–4 were Cyanobacteria and Actinobacteriota , respectively constituting 23.83% and 23.09% of all sequences, on average; followed by Proteobacteria (17.68%), Chloroflexi (9.71%), Bacteroidota (8.39%), and Acidobacteriota (5.53%) (Fig. A; Table ). For fungi, the dominant phylum in all soil samples was Ascomycota , on average constituting 72.03% of all sequences, followed far behind by Basidiomycota (18.70%), along with Fungi_unclassified (4.73%) and Chytridiomycota (4.32%) (Fig. B; Table ). Globally, the bacterial phyla Actinobacteria , Cyanobacteria , Proteobacteria , Firmicutes , Chloroflexi , Bacteroidetes , Acidobacteria , Verrucomicrobia , Gemmatimonadetes , Planctomycetes , and Deinococcus-Thermus and the fungal phyla Ascomycota , Basidiomycota , and Chytridiomycota have been reported as the most abundant taxa across all biocrusts developmental stages in various desert ecosystems. Hence, our findings strongly agreed with those reported assessments. Not surprisingly, in our study, the relative abundances of those phyla changed with the successional stage of biocrusts. For bacteria, in shifting from cyanobacterial crust to lichen crust and then to moss crust, the corresponding relative abundance of Proteobacteria increased significantly (from 15.33 to 16.08% and then to 23.34%), while that of Cyanobacteria decreased significantly (from 35.16 to 17.00% and then 20.23%) (Fig. A; Table ). Meanwhile, a hump-shaped response whereby relative abundance rose then fell was found for Actinobacteria (from 18.50 to 24.68% and then 21.98%), Chloroflexi (from 8.26 to 14.04% and then 8.06%), and Acidobacteria (from 4.89 to 8.78% and then 4.92%). Of them, Proteobacteria are dominant in a wide range of harsh conditions, especially oligotrophic habitats, with Actinobacteria also described as a dominant group in desert soils given their ability for filamentous growth, which may effectively mitigate damage from drought, high temperatures, and UV radiation . Moreover, as the oldest known photosynthetic autotrophic component of biocrusts, Cyanobacteria can survive and rapidly grow in water and nutrient-poor desert soils; the fossilized soil structure of a 2.6-billion-year-old biocrust indicates that it was most likely composed of Cyanobacteria members . 
We found that Firmicutes usually attained their highest relative abundance in desert topsoil, but then gradually declined in the course of biocrust succession. Similarly, many other studies have shown that, during the succession of biocrusts, the Cyanobacteria initially dominant in the cyanobacterial crust undergo a predictable reduction in abundance as Actinobacteria , Proteobacteria , Chloroflexi , Acidobacteria , Gemmatimonadetes , Bacteroidetes , Planctomycetes , Verrucomicrobia , and Deinococcus-Thermus become more common. Furthermore, a total of 154 bacterial genera displayed significant differences in their relative abundance across the successional gradient (i.e., stages 1–4 ) (Fig. ; Table ). Among those, the 15 most abundant (in descending rank) were Microcoleus_ PCC-7113, norank_ Coleofasciculaceae , Crinalium_SAG _22.89, norank_ Acetobacteraceae , norank_ Frankiales , unclassified_ Micromonosporaceae , Deinococcus , Roseisolibacter , Acidiphilium , Microvirga , Haliangium , Arthrobacter , norank_ Vicinamibacterales , norank_ Spirosomaceae , and Candidatus_Alysiosphaera (Fig. A; Table ). Both Microcoleus _PCC-7113 and norank_ Coleofasciculaceae , which are affiliated with the Cyanobacteria , each reached a significantly higher relative abundance in cyanobacterial crust than in either lichen crust or moss crust, and especially vis-à-vis bare sand ( stage 1 ), in the Hulun Buir Sandy Land. Although more than 320 cyanobacterial species from 70 genera have been identified in biocrusts so far, few actually participate in biocrust formation. Among these, Microcoleus is the most dominant cyanobacterial genus in biocrusts found in most arid and semi-arid regions, such as the Colorado Plateau in the USA, the Negev Desert in Israel, and both the Gurbantunggut Desert and Tengger Desert in China, as well as the Kyzyl-Kum desert in Uzbekistan; its species are typical filamentous nonheterocystous cyanobacteria. Notably, M. vaginatus and M. steenstrupii are often affiliated with Microcoleus , albeit harboring different adaptations to temperature, and both species appear dominant in cyanobacterial biocrust communities worldwide, the former being more abundant in cooler environments, while the latter dominates warmer ones. Also belonging to the Cyanobacteria is the Coleofasciculaceae family, whose members reach substantially higher relative abundances in cyanobacterial crust than in other biocrust types, being widely found in the Tengger Desert and Kyzyl-Kum Desert, as well as the Tabernas Desert in Spain. Unlike that of bacteria, the community composition of fungi has been reported to change negligibly during biocrust succession, with Chytridiomycota found at lower relative abundance in bare sandy soils whereas Ascomycota reached maximal abundances (over 60%) in different successional stages. In our study, however, the relative abundance of Ascomycota in the lichen crust ( stage 2 ) was below 60% (ca. 52%), while that of Basidiomycota reached as high as 40% (Fig. B). Conversely, Chytridiomycota was greatly reduced in abundance, from 11.08% (bare sand) to 0.79% (moss crust), across the biocrust successional gradient of Hulun Buir Sandy Land. In fact, to date, Chytridiomycota has been detected worldwide only at very low abundances, for example in the Oman and Chihuahuan deserts. These results are consistent with those of previous studies, which together suggest that Chytridiomycota are relatively more abundant in the early stage of biocrust development, hinting at their tolerance of stressful environments.
Collectively, these phyla showed no site-specificity and were ubiquitous in previous research on various desert soils and biocrusts. We found a total of 25 fungal families whose relative abundance differed significantly among stages 1–4 , which formed the successional gradient (Table ). Of those, the 15 most abundant (in descending order) were Atheliaceae , Trichomeriaceae , Didymellaceae , Pleosporaceae , Aspergillaceae , Camarosporidiellaceae , unclassified_ Agaricomycetes , Verrucariaceae , Taphrinaceae , Magnaporthaceae , Sclerotiniaceae , Periconiaceae , Cantharellales_fam_Incertae_sedis , Trimorphomycetaceae , and Cephalothecaceae (Fig. B; Table ). This provides compelling evidence that fungal community composition varies more considerably at the family than at the phylum level through the succession of biocrusts. Environmental Factors Influencing the Community Composition of Biocrust Types The Mantel test results revealed that variation in bacterial and fungal community composition (weighted UniFrac distance matrix-based) responded to the 12 soil parameters examined (Fig. ; Table ). Notably, bacterial community composition was positively and strongly correlated with both D 1 ( r = 0.657, p = 0.001) and AK ( r = 0.534, p = 0.004), moderately so with SOM ( r = 0.454, p = 0.003), TN ( r = 0.439, p = 0.007), AN ( r = 0.437, p = 0.007), and WST ( r = 0.399, p = 0.01), and likewise, but to a lesser degree, with TP ( r = 0.375, p = 0.016) and AP ( r = 0.335, p = 0.023) (Fig. ; Table ). The fungal community composition also had positive correlations of similar magnitude with D 1 ( r = 0.715, p = 0.001), AK ( r = 0.507, p = 0.001), SOM ( r = 0.449, p = 0.002), TN ( r = 0.430, p = 0.001), AN ( r = 0.394, p = 0.006), and WST ( r = 0.389, p = 0.01), along with TP ( r = 0.390, p = 0.012) as well as AP ( r = 0.342, p = 0.043) (Fig. ; Table ). Furthermore, we used db-RDA to evaluate the effects on soil bacterial and fungal community composition of the five variables retained after variance inflation factor (VIF) screening (Fig. ; Table ). These results showed that about 31.51% of the variance in bacterial community composition could be explained by the selected edaphic properties (Fig. A; CAP1 and CAP2 explained 19.66% and 11.85% of the variance, respectively). Crucially, three variables alone were mainly responsible for successional shifts in the bacterial community composition of biocrusts: WST ( r 2 = 0.825, p = 0.002), D 1 ( r 2 = 0.770, p = 0.002), and pH ( r 2 = 0.726, p = 0.004) (Fig. A; Table ). Likewise, for fungal community composition, edaphic properties accounted for about 27.86% of its variance (Fig. B; CAP1 and CAP2 explained 17.10% and 10.76% of the variance, respectively). In this respect, the observed shifts in fungal community composition were driven by four variables: D 1 ( r 2 = 0.868, p = 0.001), WST ( r 2 = 0.751, p = 0.003), pH ( r 2 = 0.521, p = 0.028), and TK ( r 2 = 0.525, p = 0.037) (Fig. B; Table ). Notably, only a small proportion of the community-level variation could be explained by the variables we examined, especially for fungal taxa, for which a high proportion of variation remained unexplained. This is largely ascribed to unmeasured environmental variables. Such a large amount of unexplained variation in the belowground bacterial and fungal communities suggests potential effects of neutral or stochastic processes on community assembly during the succession of biological soil crusts, especially for fungi.
Therefore, more environmental variables, especially the availability of soil nutrients (e.g., Ca 2+ , Mg 2+ , and Al 3+ ), should be incorporated into such coupled analyses in the future. Previous studies have demonstrated that certain soil properties, namely pH, soil organic carbon, and salinity, can variously play an instrumental role in shaping soil microbial diversity and community composition. Thus, as our results suggest, biocrusts may indirectly affect the microbial community in their underlying soil via their modulation of chemical soil properties. Importantly, the impact of environmental factors on soil bacterial and fungal communities depends on the spatial scale considered. Globally, soil pH is deemed the paramount determinant of bacterial community composition. Regionally, however, the soil type, texture, nutrient content, salinity, and moisture are all critical factors governing the bacterial structure and composition of biocrusts. Despite this new knowledge of changing microbial characteristics through the succession of biocrusts, the responsible mechanisms remain unclear. Therefore, distinguishing the fundamental ecological processes (deterministic versus stochastic) shaping soil microbial community composition in the Hulun Buir Sandy Land is a future research priority of ours. Moreover, we used a "space-for-time substitution" sampling approach to reflect the changes in microbial composition along the cyanobacterial crust–lichen crust–moss crust successional gradient in Hulun Buir Sandy Land. Admittedly, only three samples from each stage were included in our analysis. Consequently, such a small sample size may bias the results. Thus, it will be necessary to collect further samples for a complementary analysis in the future. Overall, our findings lend support to this emerging view and point to complex, possibly divergent mechanisms at work in shaping the successional microbial dynamics of biocrusts in cold desert ecosystems.
With the succession of biocrusts, significant differences in soil properties emerged in moss crust vis-à-vis the other two crust types and the bare sand (i.e., the control) (Table ). Regarding the three soil particle diameter ( D ) properties—0.002 < D 1 ≤ 0.02 mm; 0.02 < D 2 ≤ 2 mm; D 3 < 0.002 mm—in comparison to bare sand ( stage 1 ), there was a lower proportion of D 1 in all three biocrust types, which declined through their succession trajectory ( stage 2 > stage 3 > stage 4 . Conversely, the D 3 proportion rose in all three biocrusts, but their D 2 proportion remained similar along the successional gradient. Of the nine soil chemical properties, the contents of seven ( AN , TN , TP , WST , AK , SOM , and AP ) tended to increase in the three biocrust types relative to bare sand, being significantly higher in moss crust ( stage 4 ) than either cyanobacterial crust ( stage 2 ) or lichen crust ( stage 3 ). Impressively, in moss crust, these AN , TN , TP , WST , AK , SOM , and AP contents were respectively 3.26, 10, 2.71, 3.38, 5.51, 13.31, and 4.33 times greater than in bare sand, respectively (Table ). Evidently, the development and succession of biocrusts resulted in the enrichment of the shallow soil layer with carbon (C), nitrogen (N), and phosphorus (P), the most common limiting elements in terrestrial ecosystems. In contrast, both soil pH and TK were negligibly affected by the succession of biocrusts in Hulun Buir Sandy Land. These results can be explained by the powerful ecological enhancement function of biocrusts, which mediate most of the input, transport, and output of matter and energy at the surface boundaries of desert soils. Soil aggregates produced by biocrusts have been shown stabilize soil particles and soil structure , thereby altering the ecohydrological processes of desert ecosystems in addition to capturing and retaining resources (e.g., soil, organic matter, seeds, and nutrient-rich dust) . Further, biocrusts can bolster soil fertility by fixing atmospheric C and N and releasing it into the subsoil layer, thus contributing to the global C and N cycling . Moreover, biocrusts were also identified as a key component of biogeochemical phosphorus cycling during the pedogenesis of sandy substrates . Overall, our findings suggest biocrust development markedly improves the soil properties of bare sand, with a well-developed crust (i.e., moss crust, end of succession: stage 4 ) having a stronger ameliorating effect than less-developed crusts (cyanobacterial or lichen crusts). Actually, our findings largely agreed with those recently reported for European dunes .
Microbial α-diversity was estimated by the Chao1 index (Fig. ). Whereas its mean value for the bacterial community ranged from 1988.7 ± 235.07 ( stage 1 ) to 2529.8 ± 358.53 ( stage 4 ) (Fig. A), it was much lower for the fungal community, ranging from 722.24 ± 196.56 ( stage 4 ) to 943.01 ± 114.2 ( stage 2 ) across the successional gradient. In general, while the bacterial community’s α-diversity continually increased, the fungal community’s first increased and then decreased through succession, but these changes were not significant ( P > 0.05). Other calculated indexes for species richness and diversity of the bacterial and fungal communities are summarized in Table . That bacterial α-diversity increased with biocrust succession—from bare sandy soil to lichen crusts or moss crusts—is consistent with successional theory of low-to-high level shifts in diversity; hence, in this respect, the dynamics of Hulun Buir Sandy Land are much like other desert ecosystems . Interestingly, fungal α-diversity was similar across the different stages, conflicting with the view that it continually increases across biocrust successional stages . In general, fungal diversity has been found to vary with both the age and type of biocrusts, being higher in their late than early succession . We suggest this discrepancy is most probably due to habitat specificity effects on a regional scale. Again, our results show that mossy crusts could provide more pivotal resources and protection for soil bacterial communities, mainly because of their higher dust capture, water-holding, and nutrient retention capacities . Overall, 32 bacterial and 11 fungal phyla were detected across all 12 plot-level samples based on NGS sequencing. For bacteria, the six dominant phyla (i.e., with a relative abundance > 5%) in all samples of stages 1–4 were Cyanobacteria and Actinobacteriota , respectively constituting 23.83% and 23.09% of all sequences, on average; followed by Proteobacteria (17.68%), Chloroflexi (9.71%), Bacteroidota (8.39%), and Acidobacteriota (5.53%) (Fig. A; Table ). For fungi, the dominant phylum in all soil samples was Ascomycota , on average constituting 72.03% of all sequences, followed far behind by Basidiomycota (18.70%), along with Fungi_unclassified (4.73%) and Chytridiomycota (4.32%) (Fig. B; Table ). Globally, the bacterial phyla Actinobacteria , Cyanobacteria , Proteobacteria , Firmicutes , Chloroflexi , Bacteroidetes , Acidobacteria , Verrucomicrobia , Gemmatimonadetes , Planctomycetes , and Deinococcus-Thermus and the fungal phyla Ascomycota , Basidiomycota , and Chytridiomycota have been reported as the most abundant taxa across all biocrusts developmental stages in various desert ecosystems. Hence, our findings strongly agreed with those reported assessments. Not surprisingly, in our study, the relative abundances of those phyla changed with the successional stage of biocrusts. For bacteria, in shifting from cyanobacterial crust to lichen crust and then to moss crust, the corresponding relative abundance of Proteobacteria increased significantly (from 15.33 to 16.08% and then to 23.34%), while that of Cyanobacteria decreased significantly (from 35.16 to 17.00% and then 20.23%) (Fig. A; Table ). Meanwhile, a hump-shaped response whereby relative abundance rose then fell was found for Actinobacteria (from 18.50 to 24.68% and then 21.98%), Chloroflexi (from 8.26 to 14.04% and then 8.06%), and Acidobacteria (from 4.89 to 8.78% and then 4.92%). 
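For reference, the Chao1 estimator underlying the richness values reported above takes the standard bias-corrected form (the exact variant implemented by the analysis pipeline may differ slightly):

$$\hat{S}_{\mathrm{Chao1}} = S_{\mathrm{obs}} + \frac{F_{1}(F_{1}-1)}{2(F_{2}+1)}$$

where S_obs is the number of observed OTUs in a sample, F_1 the number of singletons, and F_2 the number of doubletons.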
Among these phyla, Proteobacteria are dominant in a wide range of harsh conditions, especially oligotrophic habitats, with Actinobacteria also described as a dominant group in desert soils given their ability for filamentous growth, which may effectively mitigate damage from drought, high temperatures, and UV radiation. Moreover, as the oldest known photosynthetic autotrophic component of biocrusts, Cyanobacteria can survive and rapidly grow in water- and nutrient-poor desert soils; the fossilized soil structure of a 2.6-billion-year-old biocrust indicates that it was most likely composed of Cyanobacteria members. We found that Firmicutes usually attained their highest relative abundance in desert topsoil, but then gradually declined in the course of biocrust succession. Similarly, many other studies have shown that, during the succession of biocrusts, the Cyanobacteria initially dominant in the cyanobacterial crust undergo a predictable reduction in abundance as Actinobacteria , Proteobacteria , Chloroflexi , Acidobacteria , Gemmatimonadetes , Bacteroidetes , Planctomycetes , Verrucomicrobia , and Deinococcus-Thermus become more common. Furthermore, a total of 154 bacterial genera displayed significant differences in their relative abundance across the successional gradient (i.e., stages 1–4 ) (Fig. ; Table ). Among those, the 15 most abundant (in descending rank) were Microcoleus_ PCC-7113, norank_ Coleofasciculaceae , Crinalium_SAG _22.89, norank_ Acetobacteraceae , norank_ Frankiales , unclassified_ Micromonosporaceae , Deinococcus , Roseisolibacter , Acidiphilium , Microvirga , Haliangium , Arthrobacter , norank_ Vicinamibacterales , norank_ Spirosomaceae , and Candidatus_Alysiosphaera (Fig. A; Table ). Both Microcoleus _PCC-7113 and norank_ Coleofasciculaceae , which were always affiliated with Cyanobacteria , each reached a significantly higher relative abundance in cyanobacterial crust than in either lichen crust or moss crust, and especially vis-à-vis bare sand ( stage 1 ), in the Hulun Buir Sandy Land. Although more than 320 cyanobacterial species from 70 genera have been identified in biocrusts so far, few actually participate in biocrust formation. Among these, Microcoleus is the most dominant cyanobacterial genus in biocrusts found in most arid and semi-arid regions, such as the Colorado Plateau in the USA, the Negev Desert in Israel, and both the Gurbantunggut Desert and Tengger Desert in China, as well as the Kyzyl-Kum desert in Uzbekistan; its species are typical filamentous nonheterocystous cyanobacteria. Notably, M. vaginatus and M. steenstrupii are two species often affiliated with Microcoleus , albeit harboring different adaptations to temperature, and both appear dominant in cyanobacterial biocrust communities worldwide, the former being more abundant in cooler environments while the latter dominates warmer ones. Also belonging to the Cyanobacteria is the Coleofasciculaceae family, whose members reach substantially higher relative abundances in cyanobacterial crust than in other biocrust types, being widely found in the Tengger Desert and Kyzyl-Kum Desert, as well as the Tabernas Desert in Spain. Unlike that of bacteria, fungal community composition has been reported to change negligibly during biocrust succession, with Chytridiomycota found at lower relative abundance in bare sandy soils whereas Ascomycota reached maximal abundances (over 60%) in different successional stages.
In our study, however, the relative abundance of Ascomycota in the lichen crust ( stage 2 ) was below 60% (ca. 52%), while that of Basidiomycota reached as high as 40% (Fig. B). Conversely, Chytridiomycota was greatly reduced in abundance, from 11.08% (bare sand) to 0.79% (moss crust), across the biocrust successional gradient of Hulun Buir Sandy Land. In fact, Chytridiomycota has to date been detected only at very low abundances worldwide, for example in the Oman and Chihuahuan deserts. These results are consistent with those of previous studies, which together suggest that Chytridiomycota dominate the early stage of biocrust development, hinting at their tolerance of stressful environments. Collectively, these phyla showed no site-specificity and were ubiquitous in previous research on various desert soils and biocrusts. We found a total of 25 fungal families whose relative abundance differed significantly among stages 1–4 that formed the successional gradient (Table ). Of those, the 15 most abundant (in descending order) were Atheliaceae , Trichomeriaceae , Didymellaceae , Pleosporaceae , Aspergillaceae , Camarosporidiellaceae , unclassified_ Agaricomycetes , Verrucariaceae , Taphrinaceae , Magnaporthaceae , Sclerotiniaceae , Periconiaceae , Cantharellales_fam_Incertae_sedis , Trimorphomycetaceae , and Cephalothecaceae (Fig. B; Table ). This provides compelling evidence that fungal community composition varies more considerably at the family than phylum level through the succession of biocrusts.
The Mantel test results revealed that variation in bacterial and fungal community composition (weighted UniFrac distance matrix-based) responded to the 12 soil parameters examined (Fig. ; Table ). Notably, bacterial community composition was positively and strongly correlated with both D 1 ( r = 0.657, p = 0.001) and AK ( r = 0.534, p = 0.004), moderately so with SOM ( r = 0.454, p = 0.003), TN ( r = 0.439, p = 0.007), AN ( r = 0.437, p = 0.007), and WST ( r = 0.399, p = 0.01), and likewise, but to a lesser degree, with TP ( r = 0.375, p = 0.016) and AP ( r = 0.335, p = 0.023) (Fig. ; Table ). The fungal community composition also had positive correlations of similar magnitude with D 1 ( r = 0.715, p = 0.001), AK ( r = 0.507, p = 0.001), SOM ( r = 0.449, p = 0.002), TN ( r = 0.430, p = 0.001), AN ( r = 0.394, p = 0.006), and WST ( r = 0.389, p = 0.01), along with TP ( r = 0.390, p = 0.012) as well as AP ( r = 0.342, p = 0.043) (Fig. ; Table ). Furthermore, we used db-RDA to evaluate the effects of five soil variables (retained after variance inflation factor (VIF) screening) on soil bacterial and fungal community composition (Fig. ; Table ). These results showed that about 31.51% of the variance in bacterial community composition could be explained by the selected edaphic properties (Fig. A; CAP1 and CAP2 explained 19.66% and 11.85% of the variance, respectively). Crucially, three variables alone were mainly responsible for successional shifts in the bacterial community composition of biocrusts: WST ( r 2 = 0.825, p = 0.002), D 1 ( r 2 = 0.770, p = 0.002), and pH ( r 2 = 0.726, p = 0.004) (Fig. A; Table ). Likewise, for fungal community composition, edaphic properties accounted for about 27.86% of its variance (Fig. B; CAP1 and CAP2 explained 17.10% and 10.76% of the variance, respectively). In this respect, the observed shifts in fungal community composition were driven by four variables: D 1 ( r 2 = 0.868, p = 0.001), WST ( r 2 = 0.751, p = 0.003), pH ( r 2 = 0.521, p = 0.028), and TK ( r 2 = 0.525, p = 0.037) (Fig. B; Table ). Nevertheless, only a small proportion of the community-level variation could be explained by the variables we examined, especially for fungal taxa, for which a large proportion of variation remained unexplained; this is largely ascribed to unmeasured environmental variables. Such a large amount of unexplained variation in the belowground bacterial and fungal communities suggests potential effects of neutral or stochastic processes on community assembly during the succession of biological soil crusts, especially for the fungi. Therefore, more environmental variables, especially availability of soil nutrients (i.e., Ca 2+ , Mg 2+ , and Al 3+ ), should be incorporated into coupling analysis in the future. Previous studies have demonstrated that certain soil properties, namely pH, soil organic carbon, and salinity, can variously play an instrumental role in shaping soil microbial diversity and community composition. Thus, as our results suggest, biocrusts may indirectly affect the microbial community in their underlying soil via their modulation of chemical soil properties. Importantly, the impact of environmental factors on soil bacterial and fungal communities depends on the spatial scale considered. Globally, soil pH is deemed the paramount determinant of bacterial community composition. Regionally, however, the soil type, texture, nutrient content, salinity, and moisture are all critical factors governing the bacterial structure and composition of biocrusts.
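The Mantel correlations reported above can be reproduced with a simple permutation test on two distance matrices; the following minimal Python sketch (matrix names and inputs hypothetical, assuming square symmetric distance matrices with matching sample order) illustrates the computation:

```python
import numpy as np

def mantel(dm_x, dm_y, permutations=999, seed=0):
    """Permutation-based Mantel test between two square distance matrices."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(dm_x, k=1)        # use upper-triangle entries only
    x = dm_x[iu]
    r_obs = np.corrcoef(x, dm_y[iu])[0, 1]      # observed correlation
    n = dm_x.shape[0]
    hits = 0
    for _ in range(permutations):
        p = rng.permutation(n)                  # shuffle rows/columns of one matrix
        if abs(np.corrcoef(x, dm_y[np.ix_(p, p)][iu])[0, 1]) >= abs(r_obs):
            hits += 1
    return r_obs, (hits + 1) / (permutations + 1)

# Hypothetical use: weighted-UniFrac community matrix vs. Euclidean distances of D1
# r, p_value = mantel(unifrac_dm, d1_dm)
```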
Despite this new knowledge of changing microbial characteristics through the succession of biocrusts, the responsible mechanisms remain unclear. Therefore, distinguishing the fundamental ecological processes (deterministic versus stochastic) shaping soil microbial community composition in the Hulun Buir Sandy Land is a future research priority of ours. Moreover, we used a “space-for-time substitution” sampling approach to reflect the changes in microbial composition along the cyanobacterial crust–lichen crust–moss crust successional gradient in Hulun Buir Sandy Land. Admittedly, only three samples from each stage were included in our analysis. Consequently, such a small sample size may lead to a bias in the analysis of the results. Thus, it is necessary to collect further samples for a complementary analysis in the future. Overall, our findings lend support to this emerging view and point to complex, possibly divergent mechanisms at work in shaping the successional microbial dynamics of biocrusts in cold desert ecosystems.
This study employed a “space-for-time substitution” to infer changes in soil properties and microbial dynamics during the succession of biocrusts in the Hulun Buir Sandy Land of Northeast China. Our results revealed significant improvement in the aggregated structure and nutrient status of the shallow soil layer in the course of biocrust succession (i.e., going from bare sandy surface to cyanobacterial crust, then to lichen crust, and eventually to moss crust). Meanwhile, soil bacteria and fungi exhibited contrasting trends during succession, with the former increasing but the latter decreasing. As biocrust succession progressed, soil bacterial and fungal communities at various taxonomic levels (phylum and genus) underwent predictable shifts, albeit to varying degrees; these shifts were largely driven by altered soil properties. Although more in-depth studies are needed to extend the present work, these results still provide guidance for analyzing the ecological functions of biocrusts and related applications.
Below is the link to the electronic supplementary material. Supplementary file1 (XLSX 40 KB)
The effect of cigarette smoking and heated tobacco products on different denture materials; an in vitro study | d8354fa6-f6f5-4d85-b522-808564bafde9 | 11789351 | Dentistry[mh] | In the oral environment, dental prostheses are continuously exposed to deleterious complex endogenous and exogenous factors that might result in biodegradation that alters the physical and mechanical properties of the material; one of these is cigarette smoking. According to the World Health Organization, cigarette smoking is a public health problem reported in almost 1.3 billion people around the world , despite protracted anti-smoking campaigns, smoking remains an everyday habit. Conventional cigarette smoke (CS) is composed of a mixture of a gaseous and a particulate phase and contains toxic agents such as carbon monoxide (CO) . Pigments contained in tobacco residue (tar) can be responsible for the discoloration of both dental tissues and resin-based restorations . Also, resin-based restorations may get contaminated by heavy metals such as lead and cadmium changing the chemical and physical properties such as surface roughness, water sorption, solubility, and staining . Recently, new products known as “modified risk tobacco products” (MRTP) have been presented as an alternative to conventional cigarettes and an intermediate step in quitting the smoking habit, assuming that they contain a reduced number of harmful chemicals than regular CS , and many smoke users switched to these types of products. Therefore, the increasing use of MRTP leads to the need to evaluate the effects of such systems on the color stability of restoration materials and dental tissues . Looking closely at both types of smoking to compare their effects, the smoke that directly emerges from a lit cigarette is frequently referred to as “whole smoke.” It comprises liquid droplets suspended in an aerosol mixture of gases and semi-volatile chemicals. This phase is called the particle phase. It is commonly known as “tar” or nicotine-free particulate fraction when it is devoid of nicotine. In comparison, E-cigarettes emit an aerosol that includes nicotine and other substances, but they don’t produce the same particle matter as conventional cigarettes. Consequently, these products are thought to stain less than conventional smoking . Further comparisons between CS and HT show that CS results from incomplete tobacco combustion at temperatures reaching 900 °C. In contrast, the aerosols of heated tobacco are produced at temperatures well below 400 C. This significant difference in combustion temperatures alters the resulting chemical constituents produced, supposedly causing the majority of harmful substances in CS to be absent in heated tobacco. At the same time, those are presented in substantially smaller concentrations . Conventional cigarette smoke affects the marginal integrity of polymeric tooth restorations and denture bases, such as heat-cured, flexible, titanium-reinforced, and 3D-printed resins, and it’s natural to assume that other effects, like discoloration, surface roughness, and bacterial colonization, might also be affected . Therefore, the possibility of in vitro simulation of the staining susceptibility to smoke could be of interest. Unfortunately, there is a lack of standardization for smoke staining protocols . This study explores the claims that non-heated tobacco could be less harmful and have fewer adverse effects than cigarette smoke. 
According to research on smoking cessation, smokers are more likely to quit when they are made aware of the adverse effects of smoking than when other strategies are employed to induce the same behaviour; this is what prompted the authors to perform this study. The materials used in this research are heat-cured acrylic resin and several modifications of it. Conventional heat-cured acrylic resin is known for its brittleness and low impact strength. Thus, attempts to modify these properties involve the use of metal wires or plates, fibers, particles, or metal powder. It was noted that the addition of metal fillers provides improved strength and thermal conductivity and makes the acrylic resin radiopaque. This motivated the addition of titanium nanoparticles to acrylic resin in this study. Flexible acrylic resin, on the other hand, shows lower surface roughness, hardness, and impact strength compared to conventional heat-cured acrylic resin. A recent study compared the difference in flexural strength between conventional and 3D-printed acrylic resin, finding the latter inferior to the former. This study compared the effect of CS versus heated tobacco, using a custom-made chamber device, on the discoloration, surface roughness, and bacterial colonization of different oral prosthesis materials. The null hypothesis was that conventional smoke and heated tobacco exposure would not significantly change the surface roughness, bacterial accumulation, and color of the study samples, and that there is no difference in the effect of the two types of smoking. The Research and Ethics Committee of the Faculty of Dentistry, The British University in Egypt, reviewed and approved this research project protocol with project approval number 24-005. The sample size was calculated by G*Power software for Windows version 3.1.9.4 based on a previous study. The minimum sample size was calculated to be 8 samples per group; it was increased to 10 samples per group to compensate for any defects. The primary outcomes are measuring changes in surface roughness, bacterial accumulation, and dental materials' color stability due to different smoking types. Samples preparation Four different denture base materials were used to construct one hundred and twenty disc-shaped samples of 1 cm diameter and 2 mm thickness: conventional heat-cured acrylic resin (CA) (Acrostone, Egypt), flexible acrylic resin (FA) (Valplast, Valplast International Corp, USA), heat-cured acrylic resin reinforced with titanium nanoparticles (TA) (nanoparticles from Nanogate, Egypt), and 3D-printed acrylic resin (PA) (Nexdent, The Netherlands); the composition of the materials is shown in (Table ). Another sixty samples of artificial teeth were used: conventional ready-made acrylic resin teeth (Acrostone, Egypt) and 3D-printed acrylic resin teeth (Nexdent, The Netherlands). The heat-cured acrylic resin groups were constructed using the conventional compression-molding technique with a long curing cycle (74 °C for 8 h followed by 100 °C for 1 h). For the printed groups (PA and 3D printed teeth), CAD software (Exocad, Darmstadt, Germany) was used to design the samples. Then, the printing angle was set at 90 degrees, the 3D printer (Anycubic, China) was filled with liquid resin (pink for denture base samples and white for teeth samples), and the samples were subsequently printed. The denture base samples were used to assess surface roughness and biofilm formation, while the artificial teeth samples were used to determine color change.
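The G*Power sample-size step described above can also be scripted; the sketch below uses statsmodels with a purely hypothetical effect size (Cohen's d = 1.5) and is not a reconstruction of the authors' actual G*Power inputs:

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical inputs: very large expected effect (d = 1.5), two-sided α = 0.05, power = 0.80
n_per_group = TTestIndPower().solve_power(effect_size=1.5, alpha=0.05, power=0.80)
print(f"minimum n per group ≈ {n_per_group:.1f}")   # value depends on the assumed effect size
```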
All groups were divided according to the smoking method into three subgroups: the control group with no smoking exposure (I), the conventional smoking exposure group (II), and the heated tobacco exposure group (III). All samples were stored in artificial saliva at 37 °C for 24 h to simulate the conditions of the oral cavity before any interference. Artificial saliva was obtained by dissolving the following ingredients in one liter of deionized water: Xanthan gum (0.92), KCl (1.2), NaCl (0.85), MgCl2 (0.05), NaH2PO4 (0.13), C8H8O3 (0.13) and CaCl2 (0.13). Baseline measurements The surface roughness of all denture base samples was measured using a profilometer (JITAI8101 Surface Roughness Tester—Beijing Jitai Tech Detection Device Co. Ltd, China) at a cut-off of 0.25 mm, number of cuts 1, and range ± 40 μm. In compliance with ISO 11562 recommendations for standardization, each sample was measured three times at different locations (the middle and sides), and the average was obtained to get the mean surface roughness values (Ra). According to the CIE L*a*b* color order system, the three color parameters of each artificial tooth specimen were measured using a VITA Easyshade Advance 4.01 spectrophotometer (VITA Zahnfabrik, Bad Säckingen, Germany) at 3 different areas. Mean measurements were then calculated. Smoking standardizing device The smoking standardizing apparatus, designed and constructed at The Dentistry Research Center, Faculty of Dentistry, The British University in Egypt, was a crucial tool in this study. It was created to simulate the smoking process to investigate the effects of smoking on different dental materials. The apparatus includes a motor with a gearbox to lower its speed to 2 Hz (2 cycles per second), a crankshaft, and a connecting rod attached to a slider to convert the rotational movement into a 4.5 cm-long linear movement. A stainless-steel cylinder with an internal diameter of 12 cm (6 cm radius) was designed, with a piston generating a suction volume of about 500 ml, simulating the tidal volume taken during smoking. A cigarette or electronic smoking device is attached to a valve that allows inhalation of the smoke in one direction only, simulating the mouth. Another valve allows the exhalation in one direction only, simulating the nose. To simulate the oral cavity, a pool of water with a heater linked to a thermal sensor regulates the temperature between 36.5 and 37.5 °C with 100% humidity. The samples were mounted on 2 perforated trays to allow equal, total exposure of all samples to the smoke (Fig. ). Exposure of specimens to smoking Conventional cigarettes (LM, Philip Morris International Inc., Egypt) and heated tobacco electronic cigarettes (Heets, Russet selection, Philip Morris International Inc., Italy) were used. The samples were exposed to cigarette smoke of 600 cigarettes/heets, representing 30 days of medium smoker behavior (20 cigarettes per day).
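As a quick consistency check (not part of the original protocol), the stated ~500 ml suction volume follows directly from the cylinder bore and stroke length given above:

$$V = \pi r^{2} h = \pi \times (6\ \mathrm{cm})^{2} \times 4.5\ \mathrm{cm} \approx 509\ \mathrm{cm}^{3} \approx 0.5\ \mathrm{L}$$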
The color parameters of each artificial tooth sample were measured using the same previous method, and then the color change was calculated according to the following formula: [12pt]{minimal} $$\: E_{2-1}\:=\:([ L]^2\:+\:[ a]^2\:+\:[ b]^2)^{1/2}$$ SEM assessment One sample from each group was examined by scanning electron microscopy (Thermo Fisher (USA) Quattro S Felid Emission Gun, Environmental SEM “FEG ESEM”) at the Nanotechnology Research Center at The British University in Egypt to evaluate the surface topography. Assessment of bacterial biofilm formation on dental discs by streptococcus mutans strain (S. mutans) Bacterial inoculum preparation A pure single colony of the reference strain S. (AT ATCC 25175) was used to inoculate 5 ml aliquots in test tubes of brain heart infusion broth supplemented with 2% sucrose. The bacterial cultures were placed in an incubator (Model B 28, BINDER GmbH) at 37 °C for 48 h. The bacterial culture was then adjusted to an optical density (OD) of 0.09 at 600 nm using brain heart infusion broth in the same medium containing 2% sucrose. The concentration of bacteria was determined using a spectrophotometer (Unicam, UK). The denture base samples were then sterilized and inserted separately into 50 ml falcon tubes. Aliquots of 2 ml of adjusted bacterial suspension were pipetted in these falcon tubes for biofilm formation. Then, the discs containing bacterial suspension were incubated for 48 h at 37 °C . After that, the samples were aseptically removed from the cultures using sterile forceps and washed gently three times with 0.9% saline to remove the non-adherent bacteria, then transferred to new falcon tubes containing 5 ml of 0.9% saline. To determine biofilm formation attached to the surface of the samples, the falcon tubes were vortexed with a sonicator (Acculab, USA) at 30 g for 3 min to detach microorganisms from the discs. Then, aliquots of 100 µL of the biofilm suspension were serially diluted up to 10 6 . Dilution was performed in triplicates. After that, 10 µL of each diluted suspension was inoculated on brain heart agar plates and incubated at 37 °C for 48 h (Fig. ). After the incubation, the colony-forming units (CFU) in plates with 30 to 300 typical colonies of S. mutans were counted and then reported in CFU/ml . Statistical analysis Statistical analysis of the obtained data was performed using SPSS for Windows (version 26.0; SPSS Inc., Chicago, IL, USA). Paired sample t-test was conducted to determine the change in surface roughness. An independent sample t-test was used to compare color changes between different artificial teeth materials. One-way ANOVA and Tukey post hoc tests were used to determine the effect of various materials and smoking types on surface roughness and bacterial biofilm formation. Four different denture base materials were used to construct one hundred and twenty disc-shaped samples of 1 cm diameter and 2 mm thickness: conventional heat-cured acrylic resin (CA) (Acrostone, Egypt), flexible acrylic resin (FA) (Valplast, Valplast International Corp, USA), heat-cured acrylic resin reinforced with titanium nanoparticles (TA) (TA nanoparticles ( Nanogate, Egypt), and 3D printed acrylic resin (PA) (Nexdent, The Netherlands), composition of materials are shown in (Table ). Another sixty samples of artificial teeth were used: conventional ready-made acrylic resin teeth (Acrostone, Egypt) and 3D-printed acrylic resin teeth (Nexdent, The Netherlands). 
Figure shows the samples after performing the different exposure procedures: I: the control group with no smoking exposure, II: conventional cigarette smoking exposure, and III: heated tobacco exposure. Surface roughness results The control groups did not show a significant increase in surface roughness for any of the four denture base materials tested. However, both types of smoking caused a statistically significant increase in surface roughness. The mean surface roughness values before and after exposure are shown in (Table ). Regarding the effect of the type of smoking on the change in surface roughness (Δ Ra) of different denture base materials, there was a statistically significant difference between the control and the conventional cigarette smoking subgroups. However, there was no statistically significant difference between the control and the heated tobacco groups (Table ; Fig. ). Concerning different materials, there was no statistically significant difference between the mean values of Δ Ra of different materials in the control, conventional cigarette smoking, or heated tobacco groups (Table ; Fig. ). The surface topography images of the studied samples at 1000X are presented in (Fig. ). The CS groups showed significant changes in surface topography, with increased pitting of the surface compared to the control groups. The change in surface topography was almost identical for all types of denture base materials. Also, the HT groups presented more pitting than the control groups, but to a lesser extent than the CS groups. Bacterial accumulation test Using ANOVA and Tukey as post-hoc tests, it was found that there was a statistically significant difference between all smoking subgroups. In the CA, FA, and PA groups, the heated tobacco subgroups (III) showed the highest level of bacterial accumulation, while the control groups showed the least. For the TA group, the heated tobacco subgroup showed the significantly highest level of bacterial accumulation, and there was no difference between the control and the conventional cigarette smoking groups (Table ; Fig. ). In the control subgroup (I), there was a statistically significant difference between all groups. The (FA I) and the (PA I) subgroups showed significantly higher bacterial accumulation than the (CA I) and the (TA I) groups.
In the conventional cigarette smoking subgroup (II), there was a statistically significant difference between all groups, with the (CA II) showing the significantly highest bacterial accumulation, followed by the (TA II) and (PA II), and the (FA II) showing the significantly lowest. For the heated tobacco subgroup (III), there was a statistically significant difference between all subgroups. The (TA III) showed the significantly highest bacterial accumulation, and the (FA III) showed the significantly lowest bacterial accumulation. There was no statistically significant difference between the (CA III) and (PA III) or the (PA III) and (FA III) (Table ; Fig. ). Color change For both types of teeth, the conventional cigarette smoking groups showed statistically significantly higher mean color change (ΔE) values than the control and the heated tobacco groups (Table ; Fig. ). Within each type of smoking group, there was no statistically significant difference between the conventional acrylic resin and the 3D-printed teeth (Table ; Fig. ).
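The one-way ANOVA with Tukey post hoc comparisons used for these subgroup contrasts can be reproduced as follows; the sketch uses scipy and statsmodels with hypothetical log-transformed CFU counts, not the study's data:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical log10(CFU/ml) values for one denture base material, n = 5 per subgroup
control = np.array([5.1, 5.3, 5.0, 5.2, 5.4])   # subgroup I
cs      = np.array([5.8, 6.0, 5.9, 6.1, 5.7])   # subgroup II (conventional smoke)
ht      = np.array([6.4, 6.6, 6.5, 6.3, 6.7])   # subgroup III (heated tobacco)

print(f_oneway(control, cs, ht))                 # one-way ANOVA F statistic and p value
values = np.concatenate([control, cs, ht])
groups = ["I"] * 5 + ["II"] * 5 + ["III"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))   # Tukey HSD post hoc comparisons
```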
An in vitro study design was employed to control all the factors and enable accurate data collection. The study evaluated and compared the effect of conventional cigarette smoking and heated tobacco on the surface roughness, bacterial accumulation, and color stability of different denture base and teeth materials. The null hypothesis of the study was rejected, as significant differences were found among different groups in surface roughness, biofilm formation, and color change. The results showed that conventional cigarette smoking and heated tobacco significantly increased the surface roughness of different denture base materials. Although conventional smoke increased the surface roughness by a greater degree, this difference was not statistically significant. These results are consistent with previous studies, which state that smoking of all types affects the surface roughness of dental materials and that tobacco consumption of all types is associated with tooth discoloration and changes in the surface properties of dental materials. This finding was supported by the SEM images, which showed that all CS groups had a noticeable increase in the pitting of the acrylic surface. With CS, these changes were attributed to the deposition of cigarette substances on the surface of the acrylic resin. When a cigarette is burned, the resultant smoke contains multiple components, such as carbon monoxide, carbon dioxide, nicotine, ammonia, nickel, arsenic, tar, and heavy metals such as lead and cadmium. Another possible explanation is the increase in temperature within the smoking chamber, i.e., the thermal effect of smoking, as reported in a previous study. According to Mathias P et al., the tar of cigarettes contains aromatic hydrocarbons that have a surface-dissolving action on polymeric materials. Polymeric materials are insoluble in oral fluids but are soluble to some extent in aromatic hydrocarbons. From another point of view, cigarette smoke may mix with saliva to produce an acidic solution, damaging the surface integrity of the materials. Previous studies have claimed that heated tobacco is a significantly safer smoking option in terms of product release due to the absence of tar, which was identified as a leading cause of increased surface roughness and material discoloration. However, in our study, although the increase in surface roughness after exposure to HT was less than that after CS, this difference was not statistically significant. This study also showed a significant increase in bacterial biofilm formation on all denture base materials after both CS and HT exposure, which could be related to surface roughness.
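The suggested link between roughness and biofilm formation could be probed quantitatively, for example with a simple per-specimen correlation; a minimal sketch with hypothetical paired measurements (not data from this study):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired measurements per specimen: change in roughness (ΔRa, µm)
# and log10(CFU/ml) of adherent S. mutans — illustrative values only.
delta_ra = np.array([0.05, 0.12, 0.18, 0.22, 0.30, 0.35, 0.41, 0.48])
log_cfu  = np.array([5.2, 5.5, 5.9, 6.0, 6.3, 6.4, 6.8, 7.0])

r, p = pearsonr(delta_ra, log_cfu)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```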
The clinical threshold value of surface roughness (Ra) for plaque retention on intraoral materials is 0.2 μm, as advocated by Bollen et al. Accordingly, below this value no further reduction in plaque accumulation is expected, whereas above it a proportional increase occurs. Other studies have previously stated that surface irregularities provide an environment that promotes bacterial colonization and biofilm formation. Surface roughness increases surface area, hydrophobicity, and surface energy, which, in turn, affect the mechanism of bacterial attachment and adhesion to that surface. The increase in bacterial biofilm formation was more significant in all HT groups than in the CS groups. It was previously claimed by another study that e-smoke promoted the growth of S. mutans, the expression of virulence genes, and the adhesion to and formation of biofilms on tooth surfaces, supporting the increase in bacterial biofilm formation. Increased surface roughness has long been linked to greater staining and resultant color change in resins. We can see this when we compare resins to dental ceramics, whose highly glazed and polished surfaces result in greater color stability; in comparison, resins are more porous and have a less polished outer surface. A recent study found that 3D-printed resins showed inferior mechanical properties and higher water solubility than conventional heat-cured acrylic resin, even before external stimuli, which might lead one to expect a significant difference between the two materials when exposed to smoking. However, this was not the case in our study, where the difference between the two materials was statistically insignificant. Spectrophotometers often report color using the CIELAB color system, representing the international standard for color measurement. It is currently one of the most popular and widely used color spaces and is well suited for determining minor color differences. ΔE values less than 1 are regarded as undetectable by the human eye. Color differences of 1 < ΔE < 3.3 may be detectable by a skilled operator but are considered clinically acceptable. On the other hand, values of ΔE > 3.3 would be detectable by a nonskilled person and are therefore considered clinically unacceptable. In this study, all groups except the conventional acrylic resin artificial teeth showed ΔE > 3.3. Both heated tobacco and CS caused a significant color change in 3D-printed teeth. This coincides with another study, which found that the most remarkable changes in surface roughness were observed in its 3D-printed samples, followed by the heat-polymerized samples, and that these changes can alter translucency and opacity, thus affecting color. In contrast, Alfouzan et al. studied and compared the color stability of 3D-printed and conventional heat-polymerized acrylic resins following aging, mechanical brushing, and immersion in a staining medium, and found that color changes of 3D-printed denture resins were low compared to conventional heat-polymerized resin. Flexible resin, on the other hand, was found by another study to be the least staining denture base material compared to conventional heat-cured acrylic resin. The CS caused a significant color change in conventional acrylic resin and 3D-printed artificial teeth compared to the heated tobacco groups, which could be attributed to the latter's absence of tar.
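The ΔE thresholds discussed above translate directly into a simple rating; a minimal Python sketch (the CIELAB readings are hypothetical):

```python
import math

def delta_e_ab(before, after):
    """CIELAB colour difference ΔE*ab between two (L*, a*, b*) readings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(after, before)))

def clinical_rating(de):
    """Thresholds cited above: <1 imperceptible, 1–3.3 acceptable, >3.3 unacceptable."""
    if de < 1:
        return "imperceptible"
    return "perceptible but clinically acceptable" if de <= 3.3 else "clinically unacceptable"

# Hypothetical mean readings for one artificial tooth before/after smoke exposure
baseline, exposed = (78.4, 1.2, 18.5), (74.1, 2.0, 23.9)
de = delta_e_ab(baseline, exposed)
print(round(de, 2), "->", clinical_rating(de))   # 6.95 -> clinically unacceptable
```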
Several studies report that cigarette smoking affects the color of natural teeth and dental materials, including denture teeth. The results of this study were consistent with those of Mathias et al., Wasilewski et al., and Mathias et al., who evaluated the effect of tobacco smoke on the color of composites. A slight color change relative to baseline occurred in all control group samples; this was assumed to be due to the thermal effect of the immersion temperature, to water absorption, and to mucin, which is one of the components of the artificial saliva. According to Craig, polymeric teeth are insoluble in oral fluids, but they are soluble to some extent in aromatic hydrocarbons. According to Mathias et al., tar contains aromatic hydrocarbons. It was therefore deduced from this study that such surface-dissolving substances might be causative factors of discoloration. Also, there was a possibility that the cigarette smoke mixed with saliva may have produced an acidic solution, which might have damaged the surface integrity of the materials, thus creating favorable conditions for discoloration. Despite this study's limitations, we have concluded that conventional cigarette smoking and heated tobacco affect the surface roughness, bacterial biofilm formation, and color of dental materials.
Unveiling the unknown viral world in groundwater | 43ac5ffc-4a94-4679-a1c2-951534819eb4 | 11310336 | Microbiology[mh] | Viruses, the most abundant entities on earth, have profound impacts on all organisms and ecosystems – . There are ~ 2 × 10 29 prokaryotic cells living in groundwater, which represent a major component of genetic diversity and manipulate biogeochemical and ecological processes , . In parallel with in-depth studies of groundwater microorganisms – , some evidence suggests that viruses are also involved in the biogeochemical cycle of the groundwater ecosystem , . The presence of viruses in aquifers has been previously confirmed, with variable morphology and high abundance (10-fold more than prokaryotes) . Increasing numbers of lytic bacteriophages infecting abundant hosts (e.g., Pseudomonas , Bacillus , and Desulfovibrio ) have been identified in phreatic water and deep groundwater , , implying that subsurface environments might be underexplored biotopes in the global virosphere . Although viruses play important roles in host evolution, microbial metabolism, and ecological processes , , only a few viruses could be identified by culture-dependent methods. With the development of next-generation sequencing technology, an increasing number of viral sequences has been identified from meta-omics data – , further deepening our understanding of the virome in different habitats such as oceans , , soil , , human gut , , and wastewater treatment plants . Recent studies of viral diversity and host interaction based on meta-omics data have helped overcome difficulties encountered in capturing groundwater viruses through limited amounts of culture , – . Viruses in groundwater displayed massive novelty different from previously known viruses , . Previously reported virus-host relationship in groundwater, e.g., viruses targeting Altiarchaeota and Firmicutes, has provided an ideal model for viral lifestyle and infection mechanism of some specific taxa , . However, the great diversity of groundwater prokaryotes (spanning over one hundred phyla) and antiviral systems (such as CRISPR-Cas systems and Restriction-Modification) suggest that many unknown virus-host relationships exist but are yet to be identified , . Besides, the limited information about viral auxiliary metabolic genes (AMGs) in groundwater may have hindered understanding of viral impacts on underground biogeochemical processes , , while frequent horizontal gene transfer and broad accessible host range also imply the urgent necessity for new explorations of viral AMGs involved in carbon, nitrogen, sulfur, and phosphorus metabolisms in groundwater , , . More importantly, groundwater ecosystems with typically anoxic or anaerobic environments provide ideal habitats for two keystone taxa, i.e., the candidate phyla radiation (CPR) bacteria and DPANN (an acronym of the names of the five initially found lineages Diapherotrites, Parvarchaeota, Aenigmarchaeota, Nanohaloarchaeota, and Nanoarchaeota) archaea , . CPR bacteria and DPANN archaea, as two remarkable groups in prokaryotic tree of life, usually share conserved traits such as ultrasmall cell size and extremely reduced genome , . Notably, CPR and DPANN microorganisms lack complete biosynthetic pathways for the synthesis of amino acids and nucleotides . 
Thus, they usually live as symbionts of other free-living prokaryotes to obtain essential biomolecules , and small cells of these symbionts can attach to larger cells of other microbes via diverse cell-surface modifications (e.g., pili, glycosyltransferase, concanavalin, and LamG protein) , . Moreover, antiviral systems such as Restriction-Modification and CRISPR-Cas were also found in CPR bacteria and DPANN archaea , – , implying a rich virome infecting these symbionts. Although a recent study reported some aquifer viruses targeted by the CRISPR spacer of Altiarchaeota , CPR/DPANN viruses in groundwater ecosystems remain largely unknown, especially in terms of diversity, lifestyle, functional potential, and their roles in the microbial symbiosis. Here, we aim to explore the enigmatic groundwater virosphere, including viral diversity, virus-host interaction, and AMGs related to biogeochemical cycling. To this end, we leverage ultra-deep metagenomic sequencing (over 30 giga bases per sample) of groundwater microbiome to establish a comprehensive non-redundant Groundwater Virome Catalogue (GWVC) consisting of 280,420 viral operational taxonomic units (vOTUs) at species level. This represents a ~10-fold expansion in the number of existing species of groundwater viruses derived from publicly available viral databases. Importantly, we unveil over 99% novel viruses and about 95% novel viral clusters in groundwater by comparing the GWVC with previously known viruses. The unique viral infection mode of prokaryotes suggests that microbial symbionts represented by keystone taxa like CPR bacteria and DPANN archaea are more susceptible to viral lysis in groundwater. Moreover, diverse AMGs related to methane, nitrogen, sulfur, and phosphorous cycles imply the important role of groundwater viruses in host metabolism and biogeochemical cycling. This study sheds light on the unknown viral world in groundwater, and emphasizes the fundamental importance of subsurface virosphere in future explorations of viral ecology.
The GWVC substantially expands groundwater virosphere By mining metagenomic data (20.8 tera bases) of 607 samples from monitoring wells in seven geo-environment zones across China (Fig. , Supplementary Data , and Methods), we constructed the Groundwater Virome Catalogue (GWVC). Four virus identification approaches were applied along with quality control (Methods), and 312,741 viral contigs (≥5 kb) were identified and then clustered at 95% average nucleotide identity (ANI). The generated GWVC consisted of 280,420 non-redundant viral contigs (≥5 kb), representing approximately species-level vOTUs (Fig. ). The completeness level of vOTUs in the GWVC varied from short fragments to complete or nearly complete genomes, including 5366 complete, 6092 high-quality, and 15,669 medium-quality genomes (Fig. ). Viral genome size of the GWVC ranged from 5 kb to 543.1 kb, and a sum of 107,610 vOTUs possess a length of ≥10 kb. Complete genomes had the largest mean size (49.0 kb), followed by high-quality (41.2 kb), medium-quality (36.1 kb), low-quality (11.0 kb), and not-determined (8.3 kb). Among 14,578 complete or high-quality genomes from the GWVC in the present study and the IMG/VR (groundwater section) (Fig. S1), the GWVC contributed more than 78.6% (11,458 genomes) of uncultivated viruses. To explore the novelty of GWVC, viral contigs (≥10 kb) and their proteins from the GWVC and the IMG/VR (groundwater, marine, human, surface freshwater, terrestrial, and wastewater) were extracted for comparison analysis (Methods). The GWVC at vOTUs and protein clusters (PCs) level expanded the number of known groundwater viral species 10-fold and PCs 8-fold (Fig. ). In the overlapping fraction between the GWVC and the IMG/VR, the number of vOTUs/PCs related to aquatic ecosystems (vOTUs: n = 156, PCs: n = 277,529) were much higher than those related to human systems (vOTUs: n = 92, PCs: n = 22,316) and terrestrial ecosystems (vOTUs: n = 12, PCs: n = 112,426) (Fig. S2). Remarkably, the vast majority of vOTUs/PCs (vOTUs: 99.8%, PCs: 86.3%) were unique to the GWVC (Fig. ), indicating the great potential of aquifers to act as large reservoirs of unknown viruses. To date, the differences in core features of viral genomes in groundwater and surface water remain unclear, though characteristics of prokaryotic genomes have been found to be strongly driven by environmental selection . The comparison of complete viruses in groundwater (GWVC) and surface water (surface freshwater and ocean sections of IMG/VR) indicated that groundwater viruses possess unique genomic, protein, and functional traits (Fig. 3 and Methods). Groundwater viruses are characterized by larger genome size and higher GC content but lower average molecular weight (Fig. 3a, b, c). Amino-acid biosynthetic cost minimization in microorganisms is regarded as a necessarily adaptive strategy under resource limitation in natural environments . Similarly, groundwater viruses in resource-limited groundwater might preferentially use amino acids with lower molecular weights to reduce assimilation costs . Moreover, groundwater viral proteins appear to possess higher nitrogen atoms per residue side chain (N-ARSC) and sulfur atoms per residue side chain (S-ARSC) but lower carbon atoms per residue side chain (C-ARSC) (Fig. 3d, e, f). The elemental composition of microbial proteins is also highly related to resource availability in environments , . 
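The N-/C-/S-ARSC values compared above can be computed directly from predicted protein sequences by counting side-chain atoms per residue; a minimal Python sketch (the example sequence is hypothetical):

```python
# Side-chain atom counts (C, N, S) per amino-acid residue; backbone atoms are excluded.
SIDE_CHAIN = {
    "G": (0, 0, 0), "A": (1, 0, 0), "V": (3, 0, 0), "L": (4, 0, 0), "I": (4, 0, 0),
    "P": (3, 0, 0), "F": (7, 0, 0), "W": (9, 1, 0), "M": (3, 0, 1), "S": (1, 0, 0),
    "T": (2, 0, 0), "C": (1, 0, 1), "Y": (7, 0, 0), "N": (2, 1, 0), "Q": (3, 1, 0),
    "D": (2, 0, 0), "E": (3, 0, 0), "K": (4, 1, 0), "R": (4, 3, 0), "H": (4, 2, 0),
}

def arsc(protein):
    """Return (C-ARSC, N-ARSC, S-ARSC): mean side-chain C/N/S atoms per residue."""
    counts = [SIDE_CHAIN[aa] for aa in protein.upper() if aa in SIDE_CHAIN]
    return tuple(sum(col) / len(counts) for col in zip(*counts))

print(arsc("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))  # hypothetical viral ORF product
```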
Overall, 97.0% of vOTUs (≥10 kb) in the GWVC received a taxonomic assignment. At class level, the vast majority (95.8%) of these vOTUs were assigned to the class Caudoviricetes within the realm Duplodnaviria (Fig. ), which contains the head-tailed viruses that are common in most natural environments and human hosts. In fact, over 94.0% of vOTUs (≥10 kb) in the GWVC could not be taxonomically annotated at order or family level (Fig. ). We also constructed a Caudoviricetes phylogenetic tree of GWVC vOTUs of high quality or better, together with NCBI RefSeq viruses, based on a concatenated alignment of 77 marker proteins (Fig. 4). Many GWVC viruses appeared as independent clades close to certain established families (e.g., Drexlerviridae and Autographiviridae) and recently proposed families (Casjensviridae, Mesyanzhinovviridae, Zobellviridae, Peduoviridae, and Steigviridae), among which Zobellviridae and Steigviridae (crAssphages) are thought to be widely distributed in global oceans and human guts, respectively. Many GWVC viruses formed branches with subfamily-level taxa (e.g., Azeredovirinae, Bronfenbrennervirinae, and Tybeckvirinae), greatly expanding the taxonomic diversity of Caudoviricetes. Intriguingly, some vOTUs (3.3%) were classified as nucleocytoplasmic large DNA viruses (NCLDVs), which constitute the phylum Nucleocytoviricota. NCLDVs have complex genomes and large virions similar in size to small cellular organisms, and they are usually abundant in marine, terrestrial, and wastewater environments. Moreover, NCLDVs infect a wide range of eukaryotes, such as protists and algae, implying a possible impact on groundwater eukaryotes.
Novel viruses occur in varying geo-environmental zones
We used a gene-sharing network approach to cluster viral genomes of medium quality or better from the GWVC and the RefSeq database, resulting in 2830 non-singleton viral clusters that contained at least one vOTU from the GWVC (Fig. 5a, Supplementary Data , and Methods). Only 5.4% (152 out of 2830) of the GWVC viral clusters overlapped with RefSeq viruses, suggesting that a large number of novel viral genera exist in the groundwater samples. The prevalence and abundance of viral clusters across all monitoring wells were calculated based on read recruitment to viral cluster members. Viruses were found to be widely distributed at the continental scale, with a large proportion (79.2%, n = 2240) of viral clusters occurring in different geo-environmental zones (Fig. 6a). Among the shared viral clusters, 54 were prevalent in all geo-environmental zones, 50 of which formed new clades on the viral proteomic tree, distinct from known viral families (Fig. 5b). Among these, 11 dominant viral clusters occurred in more than 20% of groundwater monitoring wells and possessed high relative abundance (RPKM ≥ 0.5%) (Fig. 6b, c).
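As an illustration of this screen, the toy sketch below flags "dominant" clusters from a per-well relative-abundance table using the two thresholds given above (occurrence in more than 20% of wells and relative abundance of at least 0.5%). The table structure and values are hypothetical, and the use of the mean abundance over wells where a cluster is detected is one plausible reading of the abundance criterion, not necessarily the exact rule applied in the study.

```python
# Toy sketch of the dominance screen: a cluster is "dominant" if it occurs in
# more than 20% of wells and its relative abundance (fraction of total RPKM
# per well) is at least 0.5%. Input values are hypothetical.
rel_abundance = {            # cluster -> {well -> relative abundance (%)}
    "VC_001": {"well_A": 1.2, "well_B": 0.0, "well_C": 0.8, "well_D": 0.6},
    "VC_002": {"well_A": 0.1, "well_B": 0.0, "well_C": 0.0, "well_D": 0.2},
}
N_WELLS = 4

def is_dominant(profile: dict, n_wells: int,
                min_prevalence: float = 0.20, min_abundance: float = 0.5) -> bool:
    detected = [v for v in profile.values() if v > 0]
    prevalence = len(detected) / n_wells
    mean_abundance = sum(detected) / len(detected) if detected else 0.0
    return prevalence > min_prevalence and mean_abundance >= min_abundance

for cluster, profile in rel_abundance.items():
    print(cluster, is_dominant(profile, N_WELLS))
```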
A total of 10 novel viral clusters unrelated to known viruses were found among these dominant viral clusters, suggesting that many more unknown viral groups can be identified from the GWVC than from existing databases (Fig. 6c). Consistent with the taxonomic annotations (Fig. ), most of the dominant viral clusters (n = 10) were affiliated with the class Caudoviricetes. Notably, one dominant viral cluster belonged to the family Inoviridae within the realm Monodnaviria. Members of the Inoviridae are a large group of viruses evolutionarily and structurally unrelated to Caudoviricetes, and they possess single-stranded DNA genomes and filamentous virions. Inoviridae are able to establish chronic infections that release virions without killing the host. Considering the high prevalence of this Inoviridae cluster in aquifers and the distinctive way these viruses interact with their hosts, they might play an important ecological role in the groundwater microbial community.
Viral infection spans an extremely broad spectrum of prokaryotes
To investigate virus-host interactions in groundwater, 34,993 prokaryotic metagenome-assembled genomes (MAGs) with completeness >70% and contamination <10% were reconstructed from the 607 metagenomes in this study (Methods). We used four computational approaches to identify 193,952 virus-host connections, linking 71,600 vOTUs (25.5%) to 21,634 prokaryotic MAGs reconstructed from the groundwater metagenomes in this study (Supplementary Data and Supplementary Data ). At phylum level, viruses were predicted to infect 104 bacterial phyla and 15 archaeal phyla (Fig. and Supplementary Data ). The number of host phyla infected by viruses in groundwater ecosystems was more than twice that in other ecosystems (e.g., terrestrial, marine, surface freshwater, human, and wastewater systems) (Fig. ), implying that groundwater is an underexplored hotspot for virus-host interaction. We found that most vOTUs (n = 41,177) were linked to Proteobacteria, which dominate the groundwater microbiome, followed by Patescibacteria (CPR bacteria, n = 4338), Bacteroidota (n = 3868), Actinobacteriota (n = 2836), and Desulfobacterota (n = 2703). Archaea were also predicted to act as hosts for 2510 vOTUs, including viruses of Thermoproteota (n = 1083), Nanoarchaeota (n = 681), Halobacteriota (n = 278), Aenigmatarchaeota (n = 173), and Hadarchaeota (n = 79). Our results revealed that a total of 4338 and 932 vOTUs were linked to 20 CPR lineages (class level) and 9 DPANN lineages (phylum level), respectively (Fig. S7). Among these CPR/DPANN lineages, Paceibacteria (CPR) and Nanoarchaeota (DPANN) were important hosts for groundwater viruses (Fig. S7), unlike Saccharimonadia (CPR) and Altiarchaeota (DPANN), which act as the main viral hosts in the digestive tract of mammals and in the deep terrestrial subsurface, respectively. To explore potential viral roles in microbe-mediated biogeochemical cycling, we annotated the functional potential of host MAGs in methane, nitrogen, and sulfur metabolism (Methods). We found that numerous viruses (n = 49,184, 68.7% of host-linked vOTUs) were linked to prokaryotic hosts involved in methane, nitrogen, and sulfur metabolism, suggesting potential effects of viral predation on microbially mediated biogeochemical cycles. The virus-host connections suggest that almost all microbial metabolic processes involved in the canonical methane, nitrogen, and sulfur cycles in groundwater environments might be impacted by viral infection (Fig. S8), especially (1) bacterial methane oxidation and archaeal methanogenesis; (2) bacterial/archaeal dissimilatory nitrate reduction, denitrification, and nitrogen fixation; and (3) bacterial/archaeal dissimilatory sulfate reduction, sulfate disproportionation, and assimilatory sulfate reduction.
On the one hand, viruses are able to reprogram the host cell during infection and thus alter the metabolism of key contributors to biogeochemical cycling. On the other hand, viral predation can mediate the turnover of abundant biogeochemical cycling microbes and strengthen element cycling in groundwater via the viral shunt. In the future, integrating viral impacts into biogeochemical models might help to better predict element cycling in groundwater. To investigate the potential impacts of viruses on microbial ecology in groundwater, lineage-specific viral infection dynamics were assessed based on virus-host abundance patterns. The composition of prokaryotic viruses was highly coupled with that of their hosts (Fig. S9a), as confirmed by the significant Spearman correlation between virus and host abundance (p < 10−5, R = 0.90) (Fig. S9b). Lineage-specific virus-host abundance ratios varied considerably among taxa (Fig. S9c). For almost all lineages, including CPR/DPANN microbes, viral abundances often exceeded host abundances, indicating that microbial symbionts might undergo active viral proliferation, as free-living microorganisms do. Furthermore, the relationship between viral lifestyle and host in groundwater was examined by predicting virulent and temperate viruses infecting the various hosts (Methods). The proportion of the virulent lifestyle was 3.6 and 4.5 times that of the temperate lifestyle in CPR and DPANN viruses, respectively, in contrast to 0.62 and 0.87 times in viruses of Proteobacteria and of other hosts (Fig. ). Among samples in which virulent or temperate viruses were detected, viruses linked to CPR/DPANN showed a higher abundance proportion of the virulent lifestyle than viruses linked to Proteobacteria and other microorganisms (Fig. ), implying that the former favor a virulent lifestyle whereas the latter tend toward a temperate lifestyle. Viruses infecting CPR/DPANN symbionts were predominantly virulent and might kill their host cells by lysis, thereby driving the turnover of CPR/DPANN communities and nutrient cycling in groundwater. By contrast, viruses infecting free-living microorganisms (e.g., Proteobacteria) were mainly temperate viruses that can exploit their hosts through lysogeny rather than killing them unless induction events are triggered.
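A minimal sketch of the two lineage-level summaries used above, the virus:host abundance ratio and the virulent:temperate ratio, is shown below. The input numbers are hypothetical (chosen only so that the example reproduces ratios of the same order as those reported above), and the exact aggregation used for Fig. S9c may differ.

```python
# Toy sketch: lineage-specific virus:host abundance ratio and
# virulent:temperate lifestyle ratio. All numbers are hypothetical.
lineages = {
    # lineage: (summed viral RPKM, summed host RPKM, n virulent, n temperate)
    "CPR (Paceibacteria)": (120.0, 35.0, 180, 50),
    "Proteobacteria":      (800.0, 650.0, 310, 500),
}

for name, (v_rpkm, h_rpkm, n_virulent, n_temperate) in lineages.items():
    vh_ratio = v_rpkm / h_rpkm                  # virus:host abundance ratio
    lifestyle_ratio = n_virulent / n_temperate  # virulent:temperate ratio
    print(f"{name}: virus/host = {vh_ratio:.2f}, "
          f"virulent/temperate = {lifestyle_ratio:.2f}")
```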
CPR/DPANN viruses regulate microbial symbiotic associations in aquifers
We investigated the CPR/DPANN virome and its potential impact on symbiotic relationships in aquifers by constructing a comprehensive dataset of groundwater CPR/DPANN viruses (Methods). A total of 230 CPR viruses and 23 DPANN viruses were identified using CRISPR- and provirus-based methods (Supplementary Data ), and over 25% of these associations were also recovered by other host prediction methods (i.e., nucleotide sequence homology or k-mer frequency match) (Supplementary Data ). Evidence was found for associations between viruses and CPR CRISPR-Cas systems (Fig. S10a, b, c), although CRISPR-Cas systems are not as prevalent in CPR bacteria as in other prokaryotic lineages. For example, protospacers of complete or high-quality viral genomes were matched to spacers in complete CRISPR-Cas systems of CPR lineages (e.g., Paceibacteria, Gracilibacteria, and Dojkabacteria). For CPR-prophage associations, clear viral genome integration was found in CPR lineages (e.g., Paceibacteria, ABY1, and Gracilibacteria) based on prophage prediction (Fig. S10d, e, f). The complete CPR viral genomes (n = 7) are 36–55 kb in length (average 39 kb), and the two complete DPANN viral genomes were both 41 kb in length, suggesting that CPR/DPANN viruses possess relatively smaller genomes than viruses infecting other taxa (Fig. S11), much as CPR/DPANN organisms themselves carry extremely compact genomes compared with other taxa. In the viral gene-sharing networks, CPR viral clusters were closely related (Fig. ), and several individual modules contained viruses linked to distinct lineages (e.g., Paceibacteria, Dojkabacteria, Microgenomatia, ABY1, and JAEDAM01) (Fig. and Supplementary Data ), implying that viruses infecting CPR bacteria might be similar in gene composition. Considering the symbiosis of CPR/DPANN with other free-living microbes, we subsequently examined the potential for interphylum infection by CPR/DPANN viruses. Seven CPR viruses were predicted to infect non-CPR phyla, whereas no DPANN viruses were linked to non-DPANN phyla (Supplementary Data ). Among these co-targeted CPR viruses, three protospacers from a complete circular genome of a Gracilibacteria phage were matched to spacers from Gracilibacteria and Bacteriota, and the three matched spacers were located alongside cas genes in three CRISPR-Cas systems with different repeat sequences (Fig. ). Importantly, this Gracilibacteria phage might be able to replicate in both Gracilibacteria (genetic code 25) and Bacteriota (genetic code 11) by leveraging compatible genetic codes. Protospacers from another circular Gracilibacteria phage genome compatible with code 11 matched spacers from Bacteriota (Fig. S12a), whereas spacers from Proteobacteria and Bacteriota matched a linear Gracilibacteria phage genome (Fig. S12b). To date, no viruses infecting CPR bacteria have been isolated, primarily because of the inherent difficulty of propagating these anaerobic symbionts. However, the predictions suggest that various laboratory-culturable Bacteriota genomes are linked to Gracilibacteria phages, implying the possibility of culturing a CPR phage in culturable microorganisms. Given that both isolation- and metagenomics-based studies have reported phages capable of infecting across distinct bacterial phyla, the broad host range of the CPR phages identified in this study warrants further experimental verification.
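The genetic-code compatibility invoked above can be made concrete with a small translation example: in the standard bacterial code (NCBI table 11) TGA is a stop codon, whereas in the Gracilibacteria/SR1 code (NCBI table 25) it is reassigned to glycine, so an ORF containing an internal TGA is read through only under code 25. The sketch below uses a hypothetical toy sequence and a deliberately minimal codon table.

```python
# Toy illustration of genetic-code compatibility: TGA is "stop" under the
# bacterial code (NCBI table 11) but glycine under the Gracilibacteria/SR1
# code (NCBI table 25). The input ORF is hypothetical.
CODE_11 = {"ATG": "M", "AAA": "K", "GGA": "G", "TGA": "*", "TAA": "*"}
CODE_25 = dict(CODE_11, TGA="G")  # table 25 reassigns TGA from stop to Gly

def translate(orf: str, table: dict) -> str:
    protein = []
    for i in range(0, len(orf) - 2, 3):
        aa = table.get(orf[i:i + 3], "X")
        if aa == "*":          # stop codon under this code
            break
        protein.append(aa)
    return "".join(protein)

orf = "ATGAAATGAGGAAAATAA"     # contains an internal TGA codon
print("code 11:", translate(orf, CODE_11))   # truncated at TGA -> "MK"
print("code 25:", translate(orf, CODE_25))   # read through as Gly -> "MKGGK"
```

One way such cross-code compatibility could arise is for a phage to avoid internal TGA codons in its coding sequences, so that its genes are translated consistently under either code; this is an interpretation offered here for illustration, not a result of the study.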
In such circumstances, acquisition of these spacers through horizontal gene transfer can be ruled out, because the CRISPR arrays (both repeat sequences and contiguous spacers) were completely different. These results suggest that CPR microorganisms, as small extracellular symbionts, might serve as viral bait for free-living microbes once interphylum infection by CPR phages occurs in groundwater ecosystems. We further investigated how virus-associated functions may augment the metabolic and survival capacities of CPR/DPANN hosts (Fig. and Supplementary Data ). About 10 CPR phages linked to four host lineages (Microgenomatia, Paceibacteria, Dojkabacteria, and ABY1) and 7 DPANN viruses infecting two lineages (Altiarchaeota and EX4484-52) encoded concanavalin/LamG domain proteins. Homology modeling and structure prediction suggest that these viral proteins might be involved in the adhesion of symbiont cells to free-living microbes (Fig. S13), indicating viral roles in attachment or biofilm formation in CPR/DPANN organisms. Many viruses linked to different CPR/DPANN lineages (ABY1, Dojkabacteria, Microgenomatia, Paceibacteria, Micrarchaeota, and EX4484-52) encoded glycosyltransferases involved in glycosylation. As in CPR/DPANN organisms themselves, glycosyltransferase genes were common among their viruses, indicating potential viral assistance in cell attachment and regulation of the cell-surface environment by enhancing the host's capacity to synthesize saccharides, polysaccharides, and glycoproteins. In addition, some AMGs were identified in CPR/DPANN viruses (Supplementary Data ). For example, DUT genes related to pyrimidine metabolism and DNMT1 genes associated with methionine degradation were detected in CPR/DPANN viruses, suggesting viral contributions to the adaptation of CPR/DPANN microbes with limited metabolic capacity.
Viral auxiliary metabolic genes involved in methane, nitrogen, sulfur, and phosphorus cycles
In the GWVC, the predicted ORFs of viral contigs spanned 23 COG categories (Fig. S14a). Annotated ORFs mainly fell into categories that support viral transcription and replication: L (replication, recombination, and repair), K (transcription), M (cell wall/membrane/envelope biogenesis), and O (post-translational modification, protein turnover, chaperones). A substantial proportion of viral ORFs were assigned to C (energy production and conversion), G (carbohydrate transport and metabolism), and Q (secondary metabolites biosynthesis, transport, and catabolism), suggesting viral functional potential to supplement host metabolism. Based on the CAZy annotation, the numerous glycoside hydrolases (GHs, n = 7792) indicated a potential impact of viruses on microbial carbohydrate metabolism in groundwater (Fig. S14b). Viral AMGs might directly affect biogeochemical processes by altering methane, nitrogen, sulfur, and phosphorus metabolism (Fig. and Supplementary Data ). With regard to methane metabolism, five pmoC genes (a methane-oxidation gene widespread in methanotrophic microorganisms) were found in five vOTUs, but no methanogenesis-related AMGs were identified (Fig. and S15; Supplementary Data ). Methane, a common trace constituent of groundwater, can be oxidized by methanotrophic or methylotrophic microorganisms as an energy source. Viruses carrying the pmoC gene might promote microbial methane oxidation for energy production during the infection cycle (Fig. ). Phylogeny suggests that viruses might have obtained pmoC from Gammaproteobacteria and Alphaproteobacteria, probably in two transfer events (Fig. S15). The viral pmoC genes acquired from Gammaproteobacteria in our study, together with those recently identified in large phages from lakes, form a virus-specific clade, suggesting that viral pmoC genes in surface and subsurface freshwaters might share a common origin. To our knowledge, viral pmoC has previously been reported in lakes and soils but not in groundwater. For nitrogen cycling, two kinds of denitrification AMGs (nirK and norB) were identified in three vOTUs (Figs. and S16; Supplementary Data ), implying that viruses could be involved in denitrification in aquifers.
The phylogenies suggested that these denitrification AMGs might have been transferred from Proteobacteria and Bacteroidetes (Fig. S16). Viral norB and nirK genes have been identified in marine samples but had not been reported in groundwater ecosystems. Four kinds of sulfur-cycling AMGs (cysH, cysD, dsrE, and sat) in 1114 vOTUs implied that groundwater viruses might facilitate dissimilatory sulfate reduction/oxidation and assimilatory sulfate reduction (Figs. and S16; Supplementary Data ). cysD and sat genes are involved in the conversion of SO4^2- to APS (adenosine 5'-phosphosulfate) in sulfate reduction/oxidation processes. Intriguingly, the most abundant sulfur-cycling AMGs were cysH genes (n = 1149), which participate in the reduction of PAPS to SO3^2-. Hosts of cysH-carrying viruses mainly included Proteobacteria, Bacteroidetes, Firmicutes, CPR bacteria, Chloroflexi, and some archaea, and phylogeny further supported the horizontal transfer of cysH from these microbial taxa to viruses (Fig. S17). Virus-associated cysH genes have been increasingly found in human and other environmental systems and are expected to be important participants in the sulfur cycle in groundwater ecosystems. Additionally, six kinds of phosphorus-cycling AMGs (ppa, phoA, phnD, phnE, pstS, and phoH) identified in 1741 vOTUs pointed to the importance of viral auxiliary metabolism in inorganic phosphorus solubilization, organic phosphorus mineralization, and phosphorus transport (Figs. and S18; Supplementary Data ). Reflecting two major phosphorus-acquisition strategies, viral ppa, encoding inorganic pyrophosphatase, might help hosts acquire phosphate by catalyzing the hydrolysis of pyrophosphate into phosphate, while viral phoA, encoding alkaline phosphatase, likely releases bioavailable phosphate from recalcitrant phosphomonoesters. Phylogenetic analysis suggested that viral ppa genes were transferred from Campylobacterota, Verrucomicrobiota, and Bacteroidota, and that viruses might have obtained phoA mainly from Actinobacteriota and Proteobacteria (Fig. S18). Viral pstS and phnD/phnE might be involved in host phosphate and phosphonate transport, respectively. The most abundant phosphorus-cycling AMGs were phoH genes (n = 1661), which encode a putative phosphate regulon protein that can be induced under phosphorus starvation. AMGs such as viral ppa, phoA, phnD, phnE, pstS, and phoH have seldom been reported in groundwater, although they have been noted in previous studies of surface environments. These diverse phosphorus-cycling AMGs might help groundwater viruses assist their hosts in coping with phosphorus-limiting stress. In summary, we established the largest Groundwater Virome Catalogue to date, containing 280,420 viral species, and unveiled more than 99% novel viruses and about 95% novel viral clusters in the groundwater ecosystem at the continental scale. Our study expanded the number of currently known aquifer viral species ~10-fold and doubled the number of prokaryotic phyla known to be virus-infected in groundwater. Virus-host analysis revealed that the small-celled microbial symbionts represented by keystone microbes (CPR bacteria and DPANN archaea) in groundwater are more susceptible to viral lysis. Notably, CPR phages appeared capable of infecting free-living bacterial phyla, and CPR/DPANN viruses may assist the adhesion of symbiont cells to free-living cells.
Viral AMGs related to methane, nitrogen, sulfur, and phosphorus metabolism might be directly involved in host metabolism and biogeochemical cycling. This study provides a tremendous opportunity to understand the underexplored viral world in groundwater and highlights the significance of the subsurface virosphere for future studies of viral ecology.
Methods
Sampling and filtration
In this study, metagenomic sequencing was performed on 607 groundwater samples collected from 525 newly constructed and 82 reconstructed monitoring wells throughout China during 2016–2017 (Fig. and Supplementary Data ). The monitoring wells were distributed across seven geo-environmental zones (Northeast plain-mountain, Huanghuaihai and Yangtze river delta plain, South China bedrock low mountain foothill, Northwest loess plateau, Southwest China karst rock mountain, Northwest arid desert, and Qinghai-Tibet plateau alpine frozen soil) at depths ranging from 0 to 600 m, taking full account of hydrology, geological environment, and groundwater burial conditions. Groundwater was pumped after well flushing and filtered through 0.22 μm polycarbonate membranes (Millipore, USA) to capture microbial cells. All filter membranes were frozen at −80 °C until high-throughput sequencing. Groundwater samples for physicochemical analysis were collected in 5 L sterile PET bottles and stored at −20 °C.
DNA extraction and metagenomic sequencing
Total genomic DNA of the 607 samples was extracted using the MoBio PowerSoil® kit (MoBio Laboratories, Carlsbad, CA, USA) following the manufacturer's protocol. DNA quantity and quality were determined using a NanoDrop spectrophotometer (NanoDrop Technologies Inc., Wilmington, DE, USA). Genomic DNA was sequenced on the Illumina HiSeq 4000 platform (Majorbio Company, Shanghai, China) with 2 × 150 bp paired-end reads. The sequencing generated 607 metagenomic datasets encompassing over 116 billion raw reads of length 150 bp (Supplementary Data ).
Development of the Groundwater Virome Catalogue
All raw reads were trimmed using the Read_qc module (default parameters) of metaWRAP v1.2.3. Clean reads from each sample were then de novo assembled with the assembly module (--megahit -l 500) of metaWRAP. Assembled contigs were processed for putative viral contig identification using four different methods (Earth Virome Pipeline, ViralVerify v1.1, VIBRANT v1.2.1, and PPR-meta v1.1). ViralVerify, VIBRANT, and PPR-meta were run with default parameters. For the Earth Virome Pipeline, we expanded the Viral Protein Family (VPF) database by strict detection and manual curation of VPFs generated from the groundwater metagenomic data following the recommended protocols. Specifically, we compared the raw VPF database against contigs from the 607 groundwater metagenomes using hmmsearch. Contigs with five or more VPF hits and a length of ≥50 kb were used for further filtering. These contigs were then compared to the KEGG Orthology (KO) and protein family (Pfam) databases, and contigs with >10% of genes annotated by KO or >25% of genes annotated by Pfam were removed. Viral proteins derived from the retained contigs were de-replicated using USEARCH with a 70% identity threshold and clustered into groups using the Markov cluster algorithm. Proteins within clusters were aligned using MAFFT, and VPFs were then created using hmmbuild. The VPFs generated in this study were added to the raw VPF database for viral identification. Putative viral contigs identified by the above four methods were filtered using geNomad v1.7.3, and sequences classified as non-viral by geNomad were removed. The remaining viral contigs were merged for host-contamination removal and completeness estimation using CheckV v1.0.1, and viral contigs with a length of ≥5 kb were retained. Following the Minimum Information about an Uncultivated Virus Genome (MIUViG) standards, all validated viral contigs (n = 312,741) were then clustered at 95% nucleotide identity over 85% coverage using CD-HIT v4.8.1 (parameters: -c 0.95 -d 400 -T 20 -M 20000 -n 5). The resulting 280,420 non-redundant species-level vOTUs with a length of ≥5 kb constituted the Groundwater Virome Catalogue (GWVC), including 107,610 vOTUs with a length of ≥10 kb.
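As a concrete illustration of the contig screen used when expanding the VPF database, the sketch below applies the thresholds stated above (at least five VPF hits, length of at least 50 kb, and removal of contigs with more than 10% KO-annotated or more than 25% Pfam-annotated genes). The data structures and counts are hypothetical, not the actual pipeline code.

```python
# Illustrative sketch of the VPF-expansion contig filter described above.
# Thresholds come from the text; the records themselves are hypothetical.
from dataclasses import dataclass

@dataclass
class Contig:
    name: str
    length: int        # bp
    n_genes: int       # predicted genes on the contig
    n_vpf_hits: int    # genes hit by viral protein families (hmmsearch)
    n_ko: int          # genes annotated by KEGG Orthology
    n_pfam: int        # genes annotated by Pfam

def passes_vpf_filter(c: Contig) -> bool:
    if c.n_vpf_hits < 5 or c.length < 50_000:
        return False
    if c.n_ko / c.n_genes > 0.10:      # too many KO-annotated (cellular) genes
        return False
    if c.n_pfam / c.n_genes > 0.25:    # too many Pfam-annotated genes
        return False
    return True

contigs = [
    Contig("ctg_001", 62_000, 80, 9, 4, 12),   # kept
    Contig("ctg_002", 55_000, 70, 6, 20, 10),  # removed: >10% KO-annotated
]
print([c.name for c in contigs if passes_vpf_filter(c)])
```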
Comparison of viral genomes and proteins to public databases
Viral contigs (≥10 kb) and their encoded proteins in the GWVC were compared against existing viral genome and protein databases in IMG/VR v.3.0. IMG/VR sequences were derived from groundwater, marine, human, surface freshwater, terrestrial, and wastewater systems. To identify vOTUs shared between IMG/VR and the GWVC, all sequences were clustered using CD-HIT at 95% identity over 85% coverage. GWVC amino acid sequences were predicted using Prodigal v2.6.3 and then clustered with IMG/VR sequences using CD-HIT v.4.8.1 (parameters: -c 0.6 -G 0 -aS 0.8 -n 4).
Calculation of GC content, protein molecular weight, and protein elemental composition
Complete genomes of groundwater viruses from the GWVC and of surface-water viruses from the surface freshwater and ocean sections of the IMG/VR were selected for calculation of genome GC content and of the molecular weight and carbon/nitrogen/sulfur atoms per residue side chain (C/N/S-ARSC) of viral proteins. Following previous methods, GC content, molecular weight, and C/N/S-ARSC were calculated using the python script 'get_gc_and_narsc.py' ( https://github.com/faylward/pangenomics/ ).
Viral taxonomy
Taxonomic annotation of vOTUs (≥10 kb) in the GWVC was performed with geNomad v1.7.3 using default parameters ( https://github.com/apcamargo/genomad ). Viral genes of GWVC vOTUs were annotated using the taxonomically informative marker profiles of geNomad, and vOTUs were then classified into distinct viral lineages according to the Virus Metadata Resource of the ICTV (International Committee on Taxonomy of Viruses).
Host assignment and lifestyle prediction
Contigs from each of the 607 groundwater metagenomes were binned using the binning module of metaWRAP (--maxbin2 --concoct --metabat2 options), and the resulting MAGs were refined using the bin_refinement module of metaWRAP (-c 70 -x 10 options). Completeness and contamination of MAGs were assessed using CheckM v1.1.2, resulting in 34,993 MAGs with >70% completeness and <10% contamination that were used for host assignment. All genomes were also dereplicated at an approximately species level (ANI ≥ 95%) with dRep v2.5.4 (-pa 0.9 -sa 0.95 -cm larger -comp 75 -con 5 -nc 0.30 options). The taxonomy of each genome was assigned using GTDB-Tk v2.1.6 with the GTDB database r207. Maximum-likelihood phylogenetic trees inferred from concatenations of 120 bacterial or 122 archaeal marker genes were also generated using GTDB-Tk. Four previously reported in silico methods were used to link vOTUs to putative host MAGs, based on CRISPR spacer matches, proviruses identified in host genomes, nucleotide sequence homology, and k-mer frequency matches. First, CRISPR spacers in microbial genomes were detected using MinCED ( https://github.com/ctSkennerton/minced ) and then matched against viral contigs, allowing ≤1 mismatch over ≥95% of the spacer length, using BLASTn (-word_size 8 -task 'blastn-short'). Second, viral genomes identified as prophages by both geNomad and CheckV were linked to their corresponding host MAGs. Third, nucleotide sequence homology between vOTUs and prokaryotic MAGs was assessed using BLASTn; host predictions were based on matches of ≥90% nucleotide identity covering ≥2 kb of the virus and putative host sequences. Fourth, the Prokaryotic virus Host Predictor (PHP) was run with default parameters to predict viral hosts based on k-mer frequency matches. Viral lifestyles (virulent/temperate) of complete or high-quality vOTUs were predicted using geNomad, CheckV, and BACPHLIP. Integrated proviruses identified by both geNomad and CheckV were considered temperate viruses. For the remaining complete or high-quality vOTUs, BACPHLIP, which is based on a random forest classifier, was used to predict lifestyle, and vOTUs with a high prediction probability (>90%) were classified as virulent or temperate.
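For illustration, the sketch below filters tabular BLASTn hits of spacers against viral contigs using the criterion stated above (at most one mismatch over at least 95% of the spacer length). The column layout follows the standard outfmt 6 convention, and the example rows and identifiers are hypothetical stand-ins rather than the actual pipeline output.

```python
# Illustrative filter for spacer-vs-virus BLASTn hits (outfmt 6), keeping
# matches with <=1 mismatch covering >=95% of the spacer length.
def parse_hits(lines, spacer_lengths):
    for line in lines:
        f = line.rstrip("\n").split("\t")
        spacer, contig = f[0], f[1]
        aln_len, mismatches = int(f[3]), int(f[4])
        if mismatches <= 1 and aln_len >= 0.95 * spacer_lengths[spacer]:
            yield spacer, contig

spacer_lengths = {"spacer_1": 33, "spacer_2": 35}   # from the CRISPR arrays
blast_rows = [
    # qseqid  sseqid  pident length mismatch gapopen qstart qend sstart send evalue bitscore
    "spacer_1\tvOTU_10\t100.0\t33\t0\t0\t1\t33\t100\t132\t1e-10\t60",
    "spacer_2\tvOTU_77\t94.0\t30\t2\t0\t1\t30\t10\t39\t1e-06\t45",
]
print(list(parse_hits(blast_rows, spacer_lengths)))   # keeps spacer_1 only
```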
Construction of the groundwater CPR/DPANN virus dataset
To construct a comprehensive dataset of groundwater CPR/DPANN viruses, we identified putative CPR/DPANN viral genomes from the GWVC and from public datasets (IMG/VR and NCBI). Specifically, CPR/DPANN genomes available in NCBI GenBank were collected and quality-filtered using CheckM. As above, CPR/DPANN genomes with >70% completeness and <10% contamination were used to predict CRISPR spacers with MinCED ( https://github.com/ctSkennerton/minced ). Spacers extracted from the NCBI genomes and from the genomes in the present study were merged and then matched against viral contigs from the GWVC and the IMG/VR (groundwater section) using BLASTn (-word_size 8 -task 'blastn-short'), allowing a maximum of one mismatch over ≥95% of the spacer length. According to NCBI BioSample annotations, proviruses in CPR/DPANN genomes derived from groundwater environments were also identified using geNomad. As stated above, all CPR/DPANN viruses from the public datasets were filtered and assessed using CheckV. A total of 230 CPR viruses and 23 DPANN viruses were identified using CRISPR- and provirus-based methods, including 90 CPR viruses and 5 DPANN viruses from the GWVC. Host prediction methods based on nucleotide sequence homology and k-mer frequency matching were also used to examine these linkages between viruses and CPR/DPANN genomes, as described above. To examine whether CPR or DPANN viruses can be targeted by spacers of non-CPR or non-DPANN genomes, the spacer database of iPHoP was compared against the CPR/DPANN viruses using BLASTn (-word_size 8 -task 'blastn-short') with a maximum of one mismatch over ≥95% of the spacer length; seven viruses were found to co-target Gracilibacteria (CPR) and non-CPR phyla. The CRISPR-Cas systems of the host genomes of these co-targeted viruses were carefully examined using CRISPRCasFinder and classified into different types. Genomic maps of the co-targeted Gracilibacteria phages were generated using Prodigal (single mode) under genetic codes 11 and 25.
Generation of viral clusters using gene-sharing networks
Two viral gene-sharing networks were constructed to generate viral clusters using vConTACT2. One network contained GWVC vOTUs of medium quality or better together with prokaryotic viruses from NCBI Viral RefSeq (v201). The other network contained the CPR/DPANN viruses identified from the GWVC and the public datasets. Visualization of the gene-sharing networks was implemented in Cytoscape v3.7.1.
Abundance profiling
RPKM (reads per kilobase per million mapped reads) values were used to represent the relative abundances of vOTUs and their host MAGs. Quality-controlled reads from each sample were mapped to a contig database with Bowtie2. SAM files were sorted using SAMtools, and the sorted BAM files were then passed to CoverM v0.3.1 ( https://github.com/wwood/CoverM ) to filter low-quality mappings and generate RPKM profiles for all samples (parameters: contig mode for viral contigs, genome mode for prokaryotic MAGs, --trim-min 0.10 --trim-max 0.90 --min-read-percent-identity 0.95 --min-read-aligned-percent 0.75 -m rpkm).
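As a worked illustration of the RPKM metric used throughout, the sketch below computes RPKM for a toy contig from its mapped-read count, its length, and the total number of mapped reads in the sample. The numbers are hypothetical; the actual profiles were produced by CoverM as described above.

```python
# Toy RPKM calculation: reads per kilobase of contig per million mapped reads.
# All numbers are hypothetical; the study generated RPKM profiles with CoverM.
def rpkm(reads_mapped_to_contig: int, contig_length_bp: int,
         total_mapped_reads_in_sample: int) -> float:
    per_kb = reads_mapped_to_contig / (contig_length_bp / 1_000)
    return per_kb / (total_mapped_reads_in_sample / 1_000_000)

# e.g. 4,500 reads on a 45 kb vOTU in a sample with 60 million mapped reads
print(round(rpkm(4_500, 45_000, 60_000_000), 3))   # -> 1.667
```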
Viral proteomic tree generation

Complete and high-quality GWVC vOTUs belonging to viral clusters prevalent in all geo-environmental zones were compared with complete viral genomes publicly available in NCBI RefSeq to generate a viral proteomic tree using ViPTree. In brief, a proteomic similarity score was calculated for each pair of genomes based on all-versus-all tBLASTx similarity. The proteomic tree was generated by BIONJ based on the resulting genomic distances, and iTOL ( https://itol.embl.de/ ) was used to visualize and display the proteomic tree.

Phylogenetic tree generation

We constructed a concatenated protein phylogeny of Caudoviricetes as previously described. The 77 marker proteins were identified from the GWVC vOTUs of high quality or better and the NCBI RefSeq viral genomes using HMMER v3.1b1. Specifically, HMMs for the 77 markers were searched against the protein sequences, and the best hits (highest bit score) were selected. Only genomes containing at least three markers were retained. All marker alignments were individually trimmed using trimAl v1.4 (parameter: -gt 0.5) and concatenated by filling in gap positions where markers were absent. We further removed genomes covering <5% of the alignment columns, leading to a final multiple sequence alignment of 7199 genomes (4238 GWVC vOTUs and 2961 RefSeq viruses) with 23,268 columns. The Caudoviricetes phylogeny was inferred from the multiple sequence alignment using FastTree v2.7.1 under the WAG + G model. The midpoint-rooted tree was visualized using iTOL, and the family/sub-family taxonomic annotations for the NCBI RefSeq viral genomes were taken directly from the Virus Metadata Resource of the ICTV. Sequences similar to the AMGs were recruited from the 34,993 MAGs in this study and the NCBI nr database based on blastp searches of the identified viral AMGs (thresholds: bit score ≥100, E-value ≤1e-5). These sets of viral AMGs and related protein sequences were aligned using MUSCLE, and the alignments were manually curated to remove poorly aligned positions using Jalview. Maximum-likelihood trees were computed using FastTree v2.7.1 and visualized using iTOL.

Reporting summary

Further information on research design is available in the Reporting Summary linked to this article.
INSIHGT: an accessible multi-scale, multi-modal 3D spatial biology platform
Modulation of antibody-antigen binding for enhanced probe penetration

The limited penetration of macromolecular probes in complex biological systems belongs to the broader subject of transport phenomena, where diffusion and advection respectively drive the dissipation and directional drift of mass, energy and momentum. When biomolecules such as proteins are involved, the (bio)molecular fluxes are additionally determined by binding reactions, which can significantly deplete the biomolecules owing to their high binding affinities and the low concentrations employed - a “reaction barrier” to deep antibody penetration. This was first described and postulated by Renier et al. (in immunolabeling-enabled three-dimensional imaging of solvent-cleared organs, iDISCO) and Murray et al. (in system-wide control of interaction time and kinetics of chemicals, SWITCH), and the latter further showed that modulation of antibody-antigen (Ab-Ag) binding affinity (SWITCH labeling) can lead to homogeneous penetration of up to 1 mm for an anti-Histone H3 antibody using low concentrations of sodium dodecyl sulfate (SDS). Other techniques similarly utilize urea, sodium deoxycholate, and heat to modulate antibody-antigen binding. However, we and others observed a general compromise between antibody labelling quality, penetration depth and uniformity, and incubation time. Deep penetration invariably requires long incubation times and yields inhomogeneous signal across depth, while faster methods lead to weak or nonspecific staining, as well as non-uniform penetration. Specifically, deep labelling with SDS (SWITCH labeling) has only been demonstrated for a handful of antigens (e.g., Histone H3, NeuN, ColIV, αSMA, and TubIII). It was found that deep staining with SDS was not universally applicable, resulting in weak calbindin staining, insufficient staining depth for β-amyloid plaques, and often requiring tailored refinement of the buffer concentration. In our validation data, we similarly observed variable performance when SDS is co-applied with antibodies (Supplementary Fig. ). Furthermore, although adding more antibody or probe theoretically improves penetration via steeper concentration gradients, either the cost becomes prohibitive or it produces a biased, rimmed surface staining pattern, especially for densely expressed binding targets. In the most extreme cases, the superficial staining signal would saturate microscope detectors while the core remains unstained (Supplementary Fig. ). Nonetheless, the concept of modulating antibody-antigen binding kinetics as a means to control probe flux through tissues is highly attractive, given the simplicity, scalability, and affordability should the method be robust and generalizable. We postulated that the reason for the highly variable performance of SDS-assisted deep immunostaining is two-fold: denaturation of antibodies beyond reparability, and ineffective reinstatement of binding reactions. This prompted us to search for alternative approaches that can tune biomolecular binding affinities while preserving both macromolecular probe mobility and stability. In addition, the negation of the modulatory effect should be efficient and robust, so that biomolecular reactions can be reinstated within the complex tissue environment. Therefore, here we aim to develop a fast, equipment-free, deep and uniform multiplexed immunostaining method that will help bring 3D histology to any basic laboratory.
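The “reaction barrier” can be illustrated with a simple one-dimensional reaction-diffusion model in which free antibody diffuses into tissue and is depleted by binding to immobile antigen. The following is a minimal numerical sketch with arbitrary, illustrative parameter values (not fitted to any experiment); it shows how strong binding confines the probe near the surface, whereas suppressing binding (k_on → 0), as in the first INSIHGT stage, lets the antibody penetrate much deeper before binding is reinstated.

```python
import numpy as np

def diffuse_with_binding(k_on, hours=24.0, depth_mm=2.0, n=100,
                         D=0.05, antigen0=1.0, ab_surface=0.02):
    """1D diffusion of free antibody with irreversible binding to immobile antigen.
    Units and parameter values are illustrative only (D in mm^2/h)."""
    dx = depth_mm / n
    dt = 0.2 * dx**2 / D                 # stable explicit Euler time step
    free = np.zeros(n)                   # free antibody
    bound = np.zeros(n)                  # antibody-antigen complex
    antigen = np.full(n, antigen0)       # immobile, unoccupied binding sites
    for _ in range(int(hours / dt)):
        lap = np.zeros(n)
        lap[1:-1] = (free[2:] - 2 * free[1:-1] + free[:-2]) / dx**2
        rate = k_on * free * antigen     # binding depletes the free pool
        free += dt * (D * lap - rate)
        bound += dt * rate
        antigen -= dt * rate
        free[0] = ab_surface             # constant antibody bath at the tissue surface
        free[-1] = free[-2]              # no-flux boundary at the tissue core
    return free, bound

free_on, bound_on = diffuse_with_binding(k_on=50.0)   # binding active: probe trapped near surface
free_off, _ = diffuse_with_binding(k_on=0.0)          # binding suppressed: probe reaches the core
print("free antibody at core, binding on :", free_on[-1])
print("free antibody at core, binding off:", free_off[-1])
```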
Boron cluster host–guest chemistry for in situ macromolecular probe mobility control

Our initial attempts to denature and refold antibodies in situ, using heat and the GroEL-GroES system respectively, proved unsuccessful (Supplementary Fig. ). We thus switched from natural molecular chaperones to artificial ones, using milder detergents (e.g., sodium deoxycholate (SDC) and 3-([3-cholamidopropyl]dimethylammonio)-2-hydroxy-1-propanesulfonate, i.e., CHAPSO) and their charge-complementary, size-matched host-complexing agents (e.g., β-cyclodextrins and their derivatives such as heptakis-(6-amino-6-deoxy)-beta-cyclodextrin, i.e., 6NβCD), which improved antibody penetration and staining success rates (Supplementary Fig. ). However, despite extensive optimization of the structure and derivatization of the detergents and their size- and charge-complementary cyclodextrins, they still showed limited generality across a panel of antibodies tested (Supplementary Fig. ), producing nonspecific vascular precipitates or nuclear staining. We then explored the use of chaotropes, which are known to solubilize proteins with enhanced antibody penetration. However, these approaches require long incubation times with extensive tissue pre-processing. Furthermore, higher concentrations of chaotropes often denature proteins as they directly interact with various protein residues and the protein backbone (Fig. ). We hence focused on testing weakly coordinating superchaotropes (WCS), a class of chemicals that we hypothesized would inhibit antibody-antigen interactions while preserving their structure and hence function (Fig. ). We searched for weakly coordinating ions based on their utility in isolating extremely electrophilic species for X-ray crystallography, or as conjugate bases of superacids. We then selected a subset of these coordinatively inert ionic species that possess high chaotropicity as candidates for our deep immunostaining purpose. After the antibodies and WCS have been homogeneously distributed throughout the tissue matrix, measures must be taken to negate the superchaotropicity in a bio-orthogonal and system-wide manner so that inter-biomolecular interactions are reinstated. To do so, we took advantage of the enthalpy-driven chaotropic assembly reaction, whereby the activities of superchaotropes can be effectively negated with supramolecular hosts in situ, reactivating interactions between the macromolecular probes and their tissue targets. Based on the above analysis, we designed a scalable deep molecular phenotyping method performed in two stages: a first infiltrative stage in which macromolecular probes co-diffuse homogeneously with WCS under minimized reaction barriers, followed by the addition of macrocyclic compounds for in situ host-guest reactions to reinstate antibody-antigen binding. With a much-narrowed list of chemicals to screen, we first benchmarked the performance of several putative WCS host-guest systems using a standard protocol as previously published (Supplementary Fig. ). These included perrhenate/α-cyclodextrin (ReO4−/αCD), ferrocenium/βCD ([Fe(C5H5)2]+/βCD), closo-dodecaborate ions ([B12X12]2−/γCD, where X = H, Cl, Br, or I), metallacarborane ([Co(7,8-C2B9H11)2]−/γCD), and polyoxometalates ([PM12O40]3−/γCD, where M = Mo or W) (Fig. ). Group 5 and 6 halide clusters and rhenium chalcogenide clusters such as [Ta6Br12]2+, [Mo6Cl14]2− and {Re6Se8}2+ derivatives were excluded due to their instability in aqueous environments.
Only ReO4−, [B12H12]2−, and [Co(7,8-C2B9H11)2]− proved compatible with immunostaining conditions without causing tissue destruction or precipitation. [B12H12]2−/γCD produced the best staining sensitivity, specificity and signal homogeneity across depth (Supplementary Fig. ), while the effect of derivatizing γCD was negligible (Supplementary Fig. ). Finally, we chose the 2-hydroxypropylated derivative (2HPγCD) for its higher water solubility in our applications. We term our method INSIHGT, for in situ host-guest chemistry for three-dimensional histology.

In situ host–guest chemistry for three-dimensional histology (INSIHGT)

INSIHGT was designed to be a minimally perturbative, deeply and homogeneously penetrating staining method for 3D histology. Designed for affordability and scalability, INSIHGT involves simply incubating conventionally formaldehyde-fixed tissues in [B12H12]2−/PBS with antibodies, then in 2HPγCD/PBS (Fig. ) - both at room temperature and with no specialized equipment. We compared INSIHGT with other 3D IHC techniques using a stringent benchmarking experiment as previously published (see “Methods”, Supplementary Fig. ) to compare their penetration depths and homogeneity. Briefly, a mouse hemibrain was first stained in bulk for an antigen using various deep immunostaining methods (“bulk-staining”), followed by cutting the tissue coronally in the middle (its thickest dimension) and re-staining for the same marker with a different fluorophore using a standardized control method (“cut-staining”), which serves as the reference signal without penetration limitations. The tissue was then imaged on the cut face to compare the bulk-staining intensity (deep staining method signal) and the cut-staining intensity (reference signal) as a function of the bulk-staining penetration depth. We found that INSIHGT achieved the deepest immunolabeling penetration with the best signal homogeneity throughout the penetration depth (Fig. ). To quantitatively compare the signals, we segmented the labeled cells and compared the ratio between the deep immunolabelling signal and the reference signal against their penetration depths. Exponential decay curve fitting showed that the signal homogeneity was near-ideal (Fig. , Supplementary Table ), with negligible decay in the deep immunolabelling signal across the penetration depth. We repeated our benchmarking experiment with different markers, and by correlating the INSIHGT signal with the reference signal, we found that INSIHGT provides reliable relative quantification of cellular marker expression levels throughout an entire mouse hemibrain stained for 1 day (Fig. ). We supplemented our comparison with the binding kinetics modulating buffers employed in eFLASH and SWITCH-pumping of mELAST tissue-hydrogel, as we lacked the specialized equipment to provide the external force fields and mechanical compressions, respectively (Supplementary Fig. ). For SWITCH-pumping of mELAST tissue-hydrogel, we utilized the latest protocol and buffer recipe. Our results also showed that the use of the binding kinetics modulating buffers alone from eFLASH and SWITCH-pumping of mELAST tissue-hydrogel led to shallower staining penetration than INSIHGT, confirming that the deep penetration of these methods is mainly contributed by the added external force fields and mechanical compressions, respectively.
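The homogeneity metric above comes from fitting an exponential decay to the bulk-to-reference signal ratio as a function of penetration depth, where a decay constant near zero indicates depth-independent labeling. Below is a minimal sketch of such a fit using SciPy, with hypothetical per-cell arrays (depth and signal ratio) standing in for the segmented measurements; it illustrates the analysis concept rather than reproducing the exact fitting code used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(depth_mm, ratio_at_surface, decay_per_mm):
    """Exponential decay of the bulk/reference signal ratio with penetration depth."""
    return ratio_at_surface * np.exp(-decay_per_mm * depth_mm)

# Hypothetical per-cell measurements (replace with segmented cell statistics).
rng = np.random.default_rng(0)
depth = rng.uniform(0.0, 4.0, 500)                               # penetration depth of each cell, mm
ratio = 0.95 * np.exp(-0.02 * depth) + rng.normal(0, 0.05, depth.size)

params, _cov = curve_fit(decay, depth, ratio, p0=(1.0, 0.1))
surface_ratio, k = params
print(f"fitted surface ratio = {surface_ratio:.2f}, decay constant = {k:.3f} /mm")
# A decay constant close to 0 /mm means the deep-staining signal is homogeneous across depth.
```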
Hence, with excellent penetration homogeneity and a simple operating protocol, INSIHGT can be the ideal method for mapping whole organs at cellular resolution. It is also the fastest deep immunolabelling method from tissue harvesting to image (Fig. ). Due to its compatibility with solvent-based delipidation methods, we recommend solvent-based clearing for the overall fastest INSIHGT protocol, although aqueous-based clearing techniques are also compatible (see the INSIHGT protocol in Supplementary Materials for further discussion). However, protocols involving the use of Triton X-100 and triethylamine must be replaced with alternatives, as these reagents form precipitates with [B12H12]2−. Notably, after washing, only a negligible effect of the [B12H12]2− treatment remains within the tissue. This is evident as the cut-staining intensity profile of INSIHGT showed a very steep exponential decay with increasing cut-staining penetration depth and became similar to that of iDISCO (Supplementary Fig. ), which has identical tissue pre-processing steps. Upon the addition of 2HPγCD and washing off the so-formed complexes, the penetration enhancement effect was completely abolished. This suggests that [B12H12]2− and cyclodextrins do not further permeabilize or disrupt the delipidated tissue.

High-throughput, multiplexed, dense whole organ mapping

After confirming that INSIHGT can achieve uniform, deeply penetrating immunostaining, we next applied it to address the challenges of whole organ multiplexed immunostaining, where the limited penetration of macromolecular probes hinders the scale, speed, or choice of antigens that can be reliably mapped. Owing to its operational simplicity, scaling up the sample size in organ mapping experiments with INSIHGT is straightforward and can be done using multiwell cell culture plates (Fig. ). For example, we demonstrated this by mapping 14 mouse kidneys in parallel (Fig. ) within 6 days of tissue harvesting using a standard 24-well cell culture plate. We then exemplify the capability of INSIHGT to simultaneously map densely expressed targets in whole organs (Fig. , Supplementary Fig. - ). We first performed multiplexed staining on mouse kidney with 3 days of incubation for Lycopersicon esculentum lectin (LEL), peanut agglutinin (PNA), Griffonia simplicifolia lectin (GSL), and AQP-1, targets associated with poor probe penetration owing to the dense expression of their binding partners (Fig. , Supplementary Fig. , Supplementary Fig. ). With INSIHGT, the dense tubules and vascular structures could be reliably visualized and traced (Supplementary Fig. ). We then proceeded to map the whole brain of a mouse that was 3 years old at the time of euthanasia. We used INSIHGT with 3 days of staining for calbindin (CALB1), NeuN, and c-Fos, providing cell type and activity information across the aged organ (Fig. , Supplementary Fig. ). With whole organ sampling, we identified regions where aging-related changes were prominent, including cavitations in the bilateral thalamus and striatum (Fig. ) as well as calbindin-positive deposits in the stratum radiatum of the hippocampus (Fig. ). Interestingly, there seems to be increased c-Fos expression among the neurons surrounding the thalamic cavitations (Fig. ), which are located deep within the brain tissue and thus cannot be explained by preferential antibody penetration, suggesting these cavitations may affect baseline neuronal activity.
Similar one-step multiplexed mapping of calcium-binding proteins across a whole adult mouse brain can also be performed with 3 days of staining (with a fixed-tissue-to-image time of 6 days) (Fig. , Supplementary Movie ). Similarly, whole adult mouse brain mapping and statistics can be obtained for ~35 million NeuN+ cells, their GABA quantities and c-Fos expression levels using the same protocol (Supplementary Fig. ), allowing structural, neurotransmitter, and activity markers to be analyzed simultaneously. Overall, INSIHGT overcomes the technical, operational, and cost bottlenecks towards accessible organ mapping for every basic molecular biology laboratory, providing rapid workflows to qualitatively evaluate organ-wide structural, molecular, and functional changes in health and disease, regardless of the spatial density of the visualization target.

Boron cluster-based supramolecular histochemistry as a foundation for spatial multi-omics

With the maturation of single-cell omics technologies, integrating these high-dimensional datasets becomes problematic. Embedding these data in their native 3D spatial contexts is the most biologically informative approach. Hence, we next tested whether our boron cluster supramolecular chemistry allows the retention and detection of multiple classes of biomolecules and their features, on which 3D spatial multi-omics technologies could be developed. With identical tissue processing steps and INSIHGT conditions, we tested 357 antibodies and found that 323 of them (90.5%) produced the expected immunostaining patterns, as manually validated with reference to the Human Protein Atlas and/or the existing literature (Fig. , Supplementary Figs. – , Supplementary Table ). This was at least six times the number of compatible antibodies demonstrated for any other deep immunostaining method (Fig. ), demonstrating the robustness and scalability of INSIHGT. Antigens ranging from small molecules (e.g., neurotransmitters), epigenetic modifications and peptides to proteins and their phosphorylated forms were detectable using INSIHGT (Fig. ). The specificity of immunostaining even allowed the degree of lysine methylation (i.e., mono-, di- and tri-methylation) and the symmetricity of arginine dimethylation to be distinguished from one another (Fig. ). We further tested 21 lectins to detect complex glycosylations, showing that [B12H12]2− does not sequester the divalent metal ions essential for their carbohydrate recognition (Fig. , Supplementary Fig. ). Small-molecule dyes such as nucleic acid probes, which are mostly positively charged, present a separate challenge, as they precipitate with closo-dodecaborates, forming [probe]n+/[B12H12]2− precipitates when co-applied with INSIHGT. We found that size-matched, charge-complementing cyclodextrin derivatives serve as cost-effective supramolecular host agents for non-destructive deep tissue penetration while preventing precipitation. For example, sulfobutylether-βCD (SBEβCD) (Fig. ) can react with nucleic acid probes to form [probe⊂SBEβCD], which exhibits penetration enhancement during INSIHGT (Fig. ) without precipitation problems. The so-formed [probe⊂SBEβCD] complex can thus be co-incubated with antibodies in the presence of [B12H12]2− for a simpler protocol. We also performed RNA integrity number (RIN) and whole genome DNA extraction analyses on INSIHGT-treated samples (Supplementary Fig. ). We found that each step of the INSIHGT protocol did not result in a significant decrease in RIN (Supplementary Fig. ).
The total RNA extracted after the full INSIHGT protocol had an RIN of 7.2, compared with an RIN of 9 from a treatment-naive control sample. For whole genome DNA, the control and INSIHGT-protocol-treated samples had similar sample integrity and total DNA yield per mm3 of sample (14.6 μg versus 10.12 μg), as well as similar subsequent whole genome sequencing quality (total clean bases 114.5 Gb versus 125.2 Gb), with both having a mapping rate of 99.96% (Supplementary Fig. ; see also “Methods” for the quality control descriptions). Whole-transcriptome RNA sequencing comparing an INSIHGT-treated sample with a paired control sample (the opposite mouse hemibrain) showed essentially no differentially expressed genes (Supplementary Fig. ). The Pearson correlation coefficient of the expression of all genes was 0.967. Hence, unsurprisingly, we found that single-molecule fluorescent in situ hybridization (FISH) is also applicable for the co-detection of protein antigens and RNAs with INSIHGT. Combining all the above probes, simultaneous 3D visualization of protein antigens, RNA transcripts, protein glycosylations, epigenetic modifications, and nuclear DNA is possible using a mixed supramolecular system in conventionally formalin-fixed intact tissue (Fig. , Table ). Taken together, our results suggest that in situ boron cluster supramolecular histochemistry can form the foundation for volumetric spatial multi-omics method development. The preservation of RNA also suggests the possibility of post-INSIHGT section-based spatial transcriptomics.

Centimeter-scale 3D histochemistry by isolated diffusional propagation

Since antibody penetration remains the most challenging obstacle, we focus the remainder of our investigation on larger-scale 3D immunophenotyping. We thus applied INSIHGT to visualize centimeter-scale human brain samples, without using any external force fields to drive the penetration of macromolecular probes. These large, pigmented samples were sliced in the middle of the tissues’ smallest dimensions to allow imaging of the deepest areas with tiling confocal microscopy. We show that INSIHGT can process a 1.5 cm × 1.5 cm × 3 cm human cortex block for parvalbumin (PV) (Fig. ), with excellent homogeneity and demonstration of parvalbumin neurons predominantly in layer 4 of the human cortex. We then scaled INSIHGT to a 1.75 cm × 2.0 cm × 2.2 cm human cerebellum block for blood vessels (using Griffonia simplicifolia lectin I, GSL-I) (Fig. ). As light-sheet microscopy is suboptimal for such a large human sample, we assessed the INSIHGT staining penetration on the cut face along the thickest dimension using confocal microscopy (Fig. , Supplementary Fig. ). This again revealed excellent homogeneity with no decay of signal across the centimeter of penetration depth. The results further show that macromolecular transport within a dense biological matrix can remain unrestricted, in a non-denaturing manner, by globally adjusting inter-biomolecular interactions. We further applied INSIHGT to a 1.0 cm × 1.4 cm × 1.4 cm human brainstem with dementia with Lewy bodies (DLB) for alpha-synuclein phosphorylated at serine 129 (αSyn-pS129) (Fig. , Supplementary Fig. ). The large scale of imaging enabled registration, and hence correlation, with mesoscale imaging modalities such as magnetic resonance imaging (MRI) (Fig. , Supplementary Movie ).
With this, we confirmed the localization of Lewy body pathologies to the locus ceruleus complex–subcerulean nuclei and substantia nigra, in keeping with the prominent rapid eye movement sleep behavior disorder (RBD) symptoms of this patient. Such a radio-histopathology approach would allow correlative structural-molecular studies of neurodegenerative diseases. Overall, the capability of INSIHGT to stain centimeter-sized tissues bridges the microscopic and mesoscopic imaging modalities, providing a general approach to correlative magnetic resonance-molecular imaging.

Volumetric spatial morpho-proteomic cartography for cell type identification and neuropeptide proximity analysis

We next extended along the molecular dimension on conventionally fixed tissues, where highly multiplexed immunostaining-based molecular profiling in 3D had not been accomplished previously. A single round of INSIHGT-based indirect immunofluorescence plus lectin histochemistry can simultaneously map up to 6 antigens (Supplementary Fig. ), tolerating a total protein concentration of >0.5 μg/μl in the staining buffer, and is limited only by spectral overlap and species compatibility. To achieve higher multiplexing, antibodies can be stripped off with 0.1 M sodium sulfite in the [B12H12]2−-containing buffer after overnight incubation at 37 °C (Fig. , Supplementary Fig. ). Since [B12H12]2− does not significantly disrupt intramolecular and intermolecular noncovalent protein interactions, the approach can be directly applied to routine formaldehyde-fixed tissues; we observed no tissue damage and little distortion, obviating the need for additional or specialist fixation methods. We exemplified this approach by mapping 28 marker expression levels in a 2 mm-thick mouse hypothalamus slice over 7 imaging rounds (Fig. , Supplementary Figs. , ). With each iterative round taking 48 h (including imaging, retrieval and elution), the whole manual process from tissue preparation to the 28-plex image took 16 days. After registration and segmentation using Cellpose 2.0 (Fig. , see “Methods”), we obtained 192,075 cells and their differentially expressed proteins (DEPs) based on immunostaining signals. Note that other user-friendly approaches such as StarDist and BCFind can also be used. Omitting 3 blood vessel channels, we then obtained, for each cell, the normalized mean intensities of the remaining 25 markers, the standard deviations (S.D.s) of the signal intensities of the same 25 markers, and the distance to the nearest vessel, for dimensionality reduction analysis and clustering. The S.D. of signal intensity for each cell served as a measure of heterogeneous expression of a given marker within that cell (e.g., a strictly cytoplasmic or nuclear marker will have a higher S.D. than a marker expressed in both the cytoplasm and the nucleus, as illustrated in Fig. ). Uniform manifold approximation and projection (UMAP) analysis of a subset of 84,139 cells based on these 51 features (the 25 mean intensities, their 25 S.D.s, and the distance to the nearest vessel) revealed 42 cell type clusters (Fig. , Supplementary Figs. , ), allowing their 3D spatial interrelationships to be determined (Supplementary Fig. ). INSIHGT allows both 3D morphology and molecular information to be well visualized via immunostaining, which is more difficult to access via current section-based spatial transcriptomics or single-cell multi-omics despite ongoing efforts.
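A minimal sketch of the per-cell feature extraction and clustering described above is shown below, assuming a 3D segmentation label volume (e.g., from Cellpose), a registered multi-channel image stack, and a vessel distance map are already in memory; the function names, array shapes and parameter choices are illustrative assumptions rather than the exact analysis code used here.

```python
import numpy as np
from scipy import ndimage
import umap  # umap-learn

def per_cell_features(labels, channels, vessel_distance):
    """labels: 3D integer segmentation; channels: (C, Z, Y, X) marker stack;
    vessel_distance: 3D distance transform to the nearest vessel (same shape as labels)."""
    ids = np.arange(1, labels.max() + 1)
    feats = []
    for ch in channels:                                   # per-marker mean and S.D. per cell
        feats.append(ndimage.mean(ch, labels=labels, index=ids))
        feats.append(ndimage.standard_deviation(ch, labels=labels, index=ids))
    feats.append(ndimage.mean(vessel_distance, labels=labels, index=ids))
    X = np.column_stack(feats)
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)  # z-score normalization

# Hypothetical usage:
# X = per_cell_features(label_volume, marker_stack, vessel_dist)
# embedding = umap.UMAP(n_neighbors=30, min_dist=0.1).fit_transform(X)
# Clusters can then be called on the embedding (e.g., with HDBSCAN or Leiden).
```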
Recent characterizations of neuronal network activities based on the diffusional spread of neuropeptides highlight the need for 3D spatial mapping of protein antigens. To obtain these morphological-molecular relationships using INSIHGT, we segmented the neuropeptide Y (NPY)-positive fibers and computed the 3D distance from each UMAP-clustered cell type’s somatic membrane (Fig. ). While most clusters lie at a similar distance from NPY fibers, certain clusters (notably right tile clusters 1 and 2) are more proximally associated with NPY fibers, suggesting that these cell clusters are differentially modulated by NPY when isotropic diffusion is assumed in the local brain parenchyma. Nonetheless, our dataset and analysis demonstrate that it is possible to estimate the likely modulatory influence for a given cell-neuropeptide pair, providing an alternative approach to discovering neuronal dynamics paradigms.

Fine-scale 3D imaging reveals unsuspected intercellular contacts traversing the Bowman space in mouse kidneys

We found that the INSIHGT process, from fixation to completion, preserves delicate structures such as free-hanging filaments and podia, enabling fine-scale analysis of compact structures such as the renal glomeruli. We found unsuspected intercellular contacts traversing the Bowman space, which were not known to be present in normal glomeruli even in serial-sectioning electron microscopy studies (Fig. ). These filaments mostly originated from the podocytic surface, although some were also seen to emerge from parietal epithelial cells (PECs). They were numerous and found all around the glomerular globe (Fig. ), and varied in their length, distance from each other, and morphology (Fig. , Supplementary Fig. ). We classified these podocyte-to-PEC microfilaments into “reachers” and “stayers”, depending on whether or not they reached the PEC surface (Fig. ). Microfilaments of the reachers type were more numerous than those of the stayers type per glomerulus (Fig. ). Visually, we noted that the emergence points of these filaments tended to cluster together, especially for the reachers type. To quantify such spatial clustering, we calculated the glomerular surface geodesic distances between the podocytic attachment points of each microfilament, which showed an inverse relationship with their path lengths (Fig. ); reachers-type filaments were geodesically located nearer to each other than stayers-type filaments (Supplementary Fig. ). This suggests that the emergence of long, projecting microfilaments that reach across the Bowman space is localized to a few hotspots on the glomerular surface. Whether these hotspots of long-reaching microfilaments are driven by signals originating from the podocyte, the glomerular environment underneath, or the nearest PECs across the Bowman space remains to be investigated, and may reveal previously unsuspected podocyte physiological responses within their microenvironments.
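The proximity analyses above (cell-to-NPY-fiber distances and geodesic distances between filament attachment points) are both nearest-neighbor computations on segmented structures. The following is a minimal sketch of the Euclidean part using a k-d tree, with hypothetical point arrays standing in for the segmented fiber and cell-surface voxels; geodesic distances along the glomerular surface would additionally require a surface mesh and a shortest-path solver, which is not shown.

```python
import numpy as np
from scipy.spatial import cKDTree

def distance_to_fibers(cell_surface_points, fiber_points, voxel_size_um=(2.0, 1.0, 1.0)):
    """Minimum 3D distance (in micrometers) from each cell-surface voxel to the nearest
    segmented fiber voxel. Both inputs are (N, 3) arrays of voxel coordinates (z, y, x)."""
    scale = np.asarray(voxel_size_um)
    tree = cKDTree(fiber_points * scale)              # fiber voxels in physical units
    dists, _ = tree.query(cell_surface_points * scale)
    return dists

# Hypothetical usage: summarize one cell's proximity to NPY+ fibers by its minimum distance.
rng = np.random.default_rng(1)
fiber_vox = rng.integers(0, 500, size=(10_000, 3))
cell_vox = rng.integers(0, 500, size=(800, 3))
print("min distance (um):", distance_to_fibers(cell_vox, fiber_vox).min())
```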
Notably, similar structures have been observed in the pathological state of crescentic glomerulonephritis, in conjunction with whole cells traversing the Bowman space. As crescentic glomerulonephritis is a final common pathway of glomerulonephropathies, it would be interesting to investigate whether there is a continuum of progressive changes, from microfilaments physiologically to larger trans-Bowman space connections pathologically. In addition, morphologically similar structures have been observed in microglia, pericytes, between tumor and immune cells, and between normal and apoptotic cells in cell culture. The podocyte-PEC connections described here thus add another organ to the growing list of nanostructural connections mediating information and matter exchange between different cell types in their physiological states.

Sparsely distributed neurofilament inclusions unique to the human cerebellum

We next completely mapped a 3 mm-thick (post-dehydration dimensions) human cerebellar folium for NF-H, GFAP, and blood vessels (Fig. , Supplementary Figs. , , Supplementary Movie ), with details preserved down to the Bergmann glia fibers, perivascular astrocytic endfeet, and Purkinje cell axons, making them amenable to 3D orientation analysis and visualization (Fig. , Supplementary Figs. , ). The detailed visualization of filamentous structures throughout the 3 mm thickness is in stark contrast to our previous attempts with similar specimens employing various methods, which showed weak NF-H signal in cerebellar sulci and barely visible GFAP signal in cerebellar white matter due to poor antibody penetration. We discovered sparsely distributed NF-H-intense inclusions that are easily missed in 2D sectioning and thus remain poorly characterized. We manually traced and identified 1078 inclusions throughout the entire imaged volume (Fig. ), and they were found in all three basic layers of the cerebellar cortex. A typical morphology of one type of these inclusions is a single bright globular inclusion at the sub-Purkinje-layer radial location, with an elongated thick fiber extension that coils back and projects to the adjacent molecular layer (Fig. ). However, much more protean morphologies also exist (Fig. , Supplementary Fig. ). To capture the morphological and spatial diversities of these inclusions, we obtained their spatial-morphometric statistics (Supplementary Fig. ), followed by principal component analysis of the compiled morphometrics, such as Sholl analysis and Horton-Strahler number. The results reveal most of these inclusions to be morphologically homogeneous, with variation explained largely by their path lengths, and a small subset characterized by much higher branching of the NF-H-intense filaments (Supplementary Fig. ). However, further understanding of these inclusions awaits broader investigations in normal and various disease states other than DLB. Preliminarily, we have also observed these inclusions in normal human cerebellum tissues (Supplementary Fig. ). With advancements in technology, correlated multi-pronged approaches using superresolution microscopy, electron microscopy and spatially resolved proteomics are expected to greatly help clarify the pathobiology of these inclusions.

INSIHGT bridges the gap between 3D histology and traditional 2D pathology in current clinical practice

The bio-orthogonal nature of the INSIHGT chemical system underlies its non-destructiveness. To highlight the clinical impact of INSIHGT in addition to 3D imaging of human samples, we found that INSIHGT-processed samples can be retrieved and processed as naïve tissues for traditional 2D histology via paraffin wax embedding and sectioning. Notably, the staining qualities of routine hematoxylin and eosin (H&E) and various special stains on post-INSIHGT processed slides were indistinguishable from those of pre-INSIHGT processed slides, even to a senior pathologist (Fig. ).
In addition to not interfering with downstream clinical processes, the preserved quality of special staining allows for multi-modal cross-validation of 3D fluorescent imaging findings, making INSIHGT an ideal platform choice for next-generation histopathology (Fig. ). Together with the possibility of post-INSIHGT DNA and RNA sequencing, we envision that quantitative 3D information within clinical specimens can be maximally extracted and preserved with high authenticity in a non-consumptive manner using INSIHGT (Supplementary Fig. ), and its speed promises compatibility with current clinical workflows and constraints, allowing digital pathology and precision medicine to benefit from 3D analysis.
The limited penetration of macromolecular probes in complex biological systems belongs to the broader subject of transport phenomena, where diffusion and advections respectively drive the dissipation and directional drift of mass, energy and momentum. When biomolecules such as proteins are involved, the (bio)molecular fluxes are additionally determined by binding reactions, which can significantly deplete biomolecules due to their high binding affinities and low concentrations employed - a “reaction barrier” to deep antibody penetration. This is first described and postulated by Renier et al. (as in immunolabeling-enabled three-dimensional imaging of solvent-cleared organs, iDISCO) and Murray et al. (as in system-wide control of interaction time and kinetics of chemicals, SWITCH), and the latter further showed that the modulation of antibody-antigen (Ab-Ag) binding affinity (SWITCH labeling) can lead to homogeneous penetration of up to 1 mm for an anti-Histone H3 antibody using low concentrations of sodium dodecyl sulfate (SDS). Other techniques similarly utilizes urea , sodium deoxycholate , and heat to modulate antibody-antigen binding. However, others and we observed a general compromise between antibody labelling quality, penetration depth and uniformity, and duration of incubation. Deep penetration invariably requires long incubation times with inhomogeneous signal across depth, while faster methods leads to weak or nonspecific staining, as well as non-uniform penetration , , . Specifically, the use of SDS for deep labelling with SWITCH labelling has only been demonstrated for a handful of antigens (e.g., Histone H3 , NeuN , ColIV, αSMA, and TubIII ). It was found that deep staining with SDS was not universally applicable , resulting in weak calbindin staining , insufficient staining depth for β-amyloid plaques , and often required tailored refinement of buffer concentration . In our validation data, we similarly observed the variable performance when SDS is co-applied with antibodies (Supplementary Fig. ). Furthermore, although adding antibodies or probes theoretically improves penetration via steep concentration gradients, either the cost becomes prohibitive or it produces a biased representation of rimmed surface staining pattern , , especially for densely expressed binding targets. In the most extreme cases, the superficial staining signal would saturate microscope detectors while the core remains unstained (Supplementary Fig. ). Nonetheless, the conception of modulating antibody-antigen binding kinetics as a means to control probe flux through tissues is highly attractive , , given the simplicity, scalability, and affordability should the method be robust and generalizable. We postulated that the reason for the highly variable performance of SDS-assisted deep immunostaining is two-fold: the denaturation of antibodies beyond reparability, and the ineffective reinstatement of binding reactions. This prompted us to search for alternative approaches that can tune biomolecular binding affinities while preserving both macromolecular probe mobility and stability. In addition, the negation of the modulatory effect should be efficient and robust to reinstate biomolecular reactions within the complex tissue environment. Therefore, here we aim to develop a fast, equipment-free, deep and uniform multiplexed immunostaining method, which will help bring 3D histology to any basic laboratories.
Our initial attempts by using heat and the GroEL-GroES system to denature and refold antibodies in situ respectively have proved unsuccessful (Supplementary Fig. ). We thus switched from the natural molecular chaperones to artificial ones using milder detergents (e.g., sodium deoxycholate (SDC) and 3-([3-Cholamidopropyl]dimethylammonio)- 2-hydroxy-1-propanesulfonate i.e., CHAPSO) and their charge-complementary, size-matched host-complexing agents (e.g., β-cyclodextrins and their derivatives such as heptakis-(6-amino-6-deoxy)-beta-cyclodextrin, i.e., 6NβCD), which improved antibody penetration and staining success rate (Supplementary Fig. ). However, despite extensive optimization on the structure and derivatization on the detergents and their size- and charge-complementary cyclodextrins, they still have limited generality for a panel of antibodies tested (Supplementary Fig. ), producing nonspecific vascular precipitates or nuclear stainings. We then explored the use of chaotropes, which are known to solubilize proteins with enhanced antibody penetration . However, these approaches require long incubation times with extensive tissue pre-processing. Furthermore, higher concentrations of chaotropes often denature proteins as they directly interact with various protein residues and backbone , (Fig. ). We hence focus on testing weakly coordinating superchaotropes (WCS), a class of chemicals that we hypothesized to inhibit antibody-antigen interactions while preserving their structure and hence functions (Fig. ). We searched for weakly coordinating ions based on their utility in isolating extremely electrophilic species for X-ray crystallography, or as conjugate bases of superacids. We can then select a subset of these coordinatively inert ionic species that possess high chaotropicity as candidates for our deep immunostaining purpose. After antibodies and WCS have been homogeneously distributed throughout the tissue matrix, measures must be taken to negate the superchaotropicity to reinstate inter-biomolecular interactions in a bio-orthogonal and system-wide manner. To do so, we took advantage of the enthalpy-driven chaotropic assembly reaction, where the activities of superchaotropes can be effectively negated with supramolecular hosts in situ, reactivating interactions between the macromolecular probes and their tissue targets. Based on the above analysis, we designed a scalable deep molecular phenotyping method, performed in two stages: a first infiltrative stage where macromolecular probes co-diffuse homogeneously with WCS with minimized reaction barriers, followed by the addition of macrocyclic compounds for in situ host-guest reactions to reinstate antibody-antigen binding. With a much-narrowed list of chemicals to screen, we first benchmarked the performances of several putative WCS host-guest systems using a standard protocol as previously published , , (Supplementary Fig. ). These include perrhenate/α-cyclodextrin (ReO 4 − /αCD), ferrocenium/βCD ([Fe(C 5 H 5 ) 2 ] + /βCD), closo -dodecaborate ions ([B 12 X 12 ] 2− /γCD (where X = H, Cl, Br, or I)), metallacarborane ([Co(7,8-C 2 B 9 H 11 ) 2 ] − /γCD), and polyoxometalates ([PM 12 O 40 ] 3− /γCD (where M = Mo, or W)) (Fig. ). Group 5 and 6 halide clusters and rhenium chalcogenide clusters such as [Ta 6 Br 12 ] 2+ , [Mo 6 Cl 14 ] 2- and {Re 6 Se 8 } 2+ derivatives were excluded due to instability in aqueous environments. 
Only ReO 4 - , [B 12 H 12 ] 2− , and [Co(7,8-C 2 B 9 H 11 ) 2 ] − proved compatible with immunostaining conditions without causing tissue destruction or precipitation. [B 12 H 12 ] 2− /γCD produced the best staining sensitivity, specificity and signal homogeneity across depth (Supplementary Fig. ), while the effect of derivatizing γCD was negligible (Supplementary Fig. ). Finally, we chose the more soluble 2-hydroxypropylated derivative (2HPγCD) for its higher water solubility in our applications. We term our method INSIHGT, for In si tu h ost- g uest chemistry for t hree-dimensional histology.
INSIHGT was designed to be a minimally perturbative, deeply and homogeneously penetrating staining method for 3D histology. Designed for affordability and scalability, INSIHGT involves simply incubating the conventional formaldehyde-fixed tissues in [B 12 H 12 ] 2- /PBS with antibodies, then in 2HPγCD/PBS (Fig. ) - both at room temperature with no specialized equipment. We compared INSIHGT with other 3D IHC techniques using a stringent benchmarking experiment as previously published (see “Methods”, Supplementary Fig. ) to compare their penetration depths and homogeneity , . Briefly, a mouse hemibrain was first stained in bulk for an antigen using various deep immunostaining methods (“bulk-staining”), followed by cutting the tissue coronally in the middle (thickest dimension) and re-stained for the same marker with a different fluorophore using a standardized control method (“cut-staining”), which serves as the reference signal without penetration limitations. The tissue was then imaged on the cut face to compare the bulk-staining intensity (deep staining method signal) and cut-staining intensity (reference signal) as a function of the bulk-staining penetration depth. We found that INSIHGT achieved the deepest immunolabeling penetration with the best signal homogeneity throughout the penetration depth (Fig. ). To quantitatively compare the signal, we segmented the labeled cells and compared the ratio between the deep immunolabelling signal and the reference signal against their penetration depths. Exponential decay curve fitting showed that the signal homogeneity was near-ideal (Fig. , Supplementary Table )—where there was negligible decay in deep immunolabelling signals across the penetration depth. We repeated our benchmarking experiment with different markers, and by correlating INSIHGT signal with the reference signal, we found INSIHGT provides reliable relative quantification of cellular marker expression levels throughout an entire mouse hemi-brain stained for 1 day (Fig. ). We supplemented our comparison with the binding kinetics modulating buffers employed in eFLASH and SWITCH-pumping of mELAST tissue-hydrogel, as we lacked the specialized equipment to provide the external force fields and mechanical compressions, respectively (Supplementary Fig. ). For SWITCH-pumping of mELAST tissue-hydrogel, we utilized the latest protocol and buffer recipe . Our results also showed the use of binding kinetics modulating buffers alone from eFLASH and SWITCH-pumping of mELAST tissue-hydrogel lead to shallower staining penetration than INSIHGT, confirming the deep penetration of these methods is mainly contributed by the added external force fields and mechanical compressions, respectively. Hence, with excellent penetration homogeneity with a simple operating protocol, INSIHGT can be the ideal method for mapping whole organs with cellular resolution. It is also the fastest deep immunolabelling from tissue harvesting to image (Fig. ). Due to its compatibility with solvent-based delipidation methods, we recommend the use of solvent-based clearing – for an overall fastest INSIHGT protocol, although aqueous-based clearing techniques are also compatible (see “INSIHGT protocol in Supplementary Materials” for further discussions). However, protocols involving the use of Triton X-100 , and triethylamine must be replaced with alternatives as they form precipitates with [B 12 H 12 ] 2− . Notably, after washing, only a negligible effect of [B 12 H 12 ] 2- -treatment will remain within the tissue. 
This is evident as the cut-staining intensity profile of INSIHGT showed very steep exponential decay with increasing cut-staining penetration depth, and became similar to that of iDISCO (Supplementary Fig. ) which has identical tissue pre-processing steps. Upon the addition of 2HPγCD and washing off the so-formed complexes, the penetration enhancement effect was completely abolished. This suggests that [B 12 H 12 ] 2- and cyclodextrins do not further permeabilize or disrupt the delipidated tissue.
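For readers implementing a comparable benchmark, the depth-homogeneity quantification described above can be sketched as follows. This is a minimal illustration assuming per-cell arrays of penetration depth and bulk/cut intensities are already available from segmentation; the variable names are hypothetical and this is not the authors' analysis code.

```python
# Minimal sketch: quantify staining homogeneity by fitting
# ratio = exp(-tau * depth) to per-cell bulk/cut intensity ratios.
# `depth`, `bulk_int`, `cut_int` are hypothetical 1D arrays from segmented cells.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

def decay(depth, tau):
    return np.exp(-tau * depth)

def homogeneity_metrics(depth, bulk_int, cut_int):
    ratio = bulk_int / cut_int                                    # deep-staining signal vs. reference
    ratio = ratio / np.median(ratio[depth < np.percentile(depth, 10)])  # normalize near the surface
    (tau,), _ = curve_fit(decay, depth, ratio, p0=[1e-3])
    r, _ = pearsonr(bulk_int, cut_int)                            # quantification reliability across depth
    return tau, r                                                 # tau -> 0+ indicates depth-independent staining

# Synthetic demonstration
rng = np.random.default_rng(0)
depth = rng.uniform(0, 4000, 5000)                                # micrometres from the tissue surface
cut = rng.lognormal(2.0, 0.3, 5000)                               # reference (cut-staining) intensities
bulk = cut * np.exp(-2e-4 * depth) * rng.normal(1, 0.05, 5000)
print(homogeneity_metrics(depth, bulk, cut))
```

A smaller fitted decay constant indicates a flatter ratio-versus-depth profile, i.e., more homogeneous bulk staining relative to the reference.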
After confirming that INSIHGT can achieve uniform, deeply penetrating immunostaining, we next applied it to address the challenges of whole organ multiplexed immunostaining, where the limited penetration of macromolecular probes hinders the scale, speed, or choice of antigens that can be reliably mapped. Owing to its operational simplicity, scaling up the sample size in organ mapping experiments with INSIHGT is straightforward and can be done using multiwell cell culture plates (Fig. ). For example, we mapped 14 mouse kidneys in parallel (Fig. ) within 6 days of tissue harvesting using a standard 24-well cell culture plate. We then exemplified the capability of INSIHGT to simultaneously map densely expressed targets in whole organs (Fig. , Supplementary Fig. - ). We first performed multiplexed staining on mouse kidney with 3 days of incubation for Lycopersicon esculentum lectin (LEL), Peanut agglutinin (PNA), Griffonia simplicifolia lectin (GSL), and AQP-1, targets associated with poor probe penetration due to the dense expression of their binding partners (Fig. , Supplementary Fig. , Supplementary Fig. ). With INSIHGT, the dense tubules and vascular structures could be reliably visualized and traced (Supplementary Fig. ). We then mapped the whole brain of a mouse that was 3 years old at the time of euthanasia, using INSIHGT with 3 days of staining for Calbindin (CALB1), NeuN, and c-Fos, providing cell type and activity information across the aged organ (Fig. , Supplementary Fig. ). With whole organ sampling, we identified regions where aging-related changes were prominent; these included cavitations in the bilateral thalamus and striatum (Fig. ), as well as calbindin-positive deposits in the stratum radiatum of the hippocampus (Fig. ). Interestingly, c-Fos expression appeared elevated among the neurons surrounding the thalamic cavitations (Fig. ); as these cavitations are located deep within the brain tissue, this elevation cannot be explained by preferential antibody penetration, suggesting the cavitations may affect baseline neuronal activity. Similar 1-step multiplexed mapping of calcium-binding proteins across a whole adult mouse brain can also be performed with 3 days of staining (with a fixed tissue-to-image time of 6 days) (Fig. , Supplementary Movie ). Similarly, whole adult mouse brain mapping and statistics can be obtained for ~35 million NeuN+ cells, their GABA quantities, and their c-Fos expression levels using the same protocol (Supplementary Fig. ), allowing structure, neurotransmitter, and activity markers to be analyzed simultaneously. Overall, INSIHGT overcomes technical, operational, and cost bottlenecks towards accessible organ mapping for every basic molecular biology laboratory, providing rapid workflows to qualitatively evaluate organ-wide structural, molecular, and functional changes in health and disease, regardless of the spatial density of the visualization target.
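As a simplified illustration of how such per-cell counts and marker intensities can be tabulated from a multichannel volume, the sketch below uses scikit-image in place of the dedicated segmentation pipelines used in the study; the array names and thresholding choices are hypothetical.

```python
# Sketch: label marker-positive cells in a 3D volume and measure co-stained
# channel intensities per cell. `neun`, `gaba`, `cfos` are hypothetical 3D arrays.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops_table

def per_cell_stats(neun, gaba, cfos):
    mask = neun > threshold_otsu(neun)               # coarse intensity segmentation
    mask = ndi.binary_opening(mask)                  # remove speckle
    labels = label(mask)
    stats = regionprops_table(
        labels,
        intensity_image=np.stack([gaba, cfos], axis=-1),
        properties=("label", "centroid", "area", "mean_intensity"),
    )
    return labels.max(), stats                       # cell count and per-cell marker means

rng = np.random.default_rng(1)
neun = ndi.gaussian_filter(rng.random((64, 128, 128)), 2)   # smoothed toy volume
count, stats = per_cell_stats(neun, neun * 0.5, neun * 0.2)
print(count, list(stats)[:4])
```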
With the maturation of single-cell omics technologies, integrating these high-dimensional datasets has become a major challenge, and embedding them in their native 3D spatial contexts is the most biologically informative approach. We therefore tested whether our boron cluster supramolecular chemistry allows the retention and detection of multiple classes of biomolecules and their features, on which 3D spatial multi-omics technologies could be built. With identical tissue processing steps and INSIHGT conditions, we tested 357 antibodies and found that 323 of them (90.5%) produced the expected immunostaining patterns, as manually validated with reference to the Human Protein Atlas and/or existing literature (Fig. , Supplementary Figs. – , Supplementary Table ). This was at least six times the number of compatible antibodies reported for any other deep immunostaining method (Fig. ), demonstrating the robustness and scalability of INSIHGT. Antigens ranging from small molecules (e.g., neurotransmitters), epigenetic modifications, and peptides to proteins and their phosphorylated forms were detectable using INSIHGT (Fig. ). The specificity of immunostaining even allowed the degree of lysine methylation (i.e., mono-, di- and tri-methylation) and the symmetry of arginine dimethylation to be distinguished (Fig. ). We further tested 21 lectins to detect complex glycosylations, showing that [B 12 H 12 ] 2− does not sequester the divalent metal ions essential for their carbohydrate recognition (Fig. , Supplementary Fig. ). Small molecule dyes such as nucleic acid probes, which are mostly positively charged, present a separate challenge, as they form [probe] n+ /[B 12 H 12 ] 2− precipitates when co-applied with INSIHGT. We found size-matched, charge-complementary cyclodextrin derivatives to be cost-effective supramolecular host agents that enable non-destructive deep tissue penetration while preventing precipitation. For example, sulfobutylether-βCD (SBEβCD) (Fig. ) can complex with nucleic acid probes to form [probe⊂SBEβCD], which exhibits penetration enhancement during INSIHGT (Fig. ) without precipitation problems. The so-formed [probe⊂SBEβCD] complex can thus be co-incubated with antibodies in the presence of [B 12 H 12 ] 2− for a simpler protocol. We also performed RNA integrity number (RIN) and whole genome DNA extraction analyses on INSIHGT-treated samples (Supplementary Fig. ). No step of the INSIHGT protocol resulted in a significant decrease in RIN (Supplementary Fig. ); the total RNA extracted after the whole INSIHGT protocol had an RIN of 7.2, compared with 9 for a treatment-naive control sample. For whole genome DNA, the control and INSIHGT-treated samples had similar sample integrity and total DNA yield per mm 3 of sample (14.6 μg versus 10.12 μg), as well as similar whole genome sequencing quality (total clean bases 114.5 Gb versus 125.2 Gb), with both having a mapping rate of 99.96% (Supplementary Fig. ; see also “Methods” for the quality control descriptions). Whole-transcriptome RNA sequencing comparing an INSIHGT-treated sample with a paired control sample (the opposite mouse hemibrain) showed essentially no differentially expressed genes (Supplementary Fig. ); the Pearson correlation coefficient across the expression of all genes was 0.967.
Hence, unsurprisingly, we found that single-molecule fluorescence in situ hybridization (FISH) is also applicable for the co-detection of protein antigens and RNAs with INSIHGT. Combining all the above probes, simultaneous 3D visualization of protein antigens, RNA transcripts, protein glycosylations, epigenetic modifications, and nuclear DNA is possible using a mixed supramolecular system in conventionally formalin-fixed intact tissue (Fig. , Table ). Taken together, our results suggest that in situ boron cluster supramolecular histochemistry can form the foundation for volumetric spatial multi-omics method development. The preservation of RNA further suggests the possibility of post-INSIHGT, section-based spatial transcriptomics.
Since antibody penetration remains the most challenging obstacle, we focus the remainder of our investigation on larger-scale 3D immunophenotyping. We thus applied INSIHGT to visualize centimeter-scale human brain samples, without using any external force fields to drive the penetration of macromolecular probes. These large, pigmented samples were sliced in the middle of the tissues’ smallest dimensions to allow imaging of the deepest areas with tiling confocal microscopy. We show that INSIHGT can process a 1.5 cm × 1.5 cm × 3 cm human cortex block for parvalbumin (PV) (Fig. ), with excellent homogeneity and demonstration of parvalbumin neurons predominantly in layer 4 of the human cortex. We then scaled INSIHGT to a 1.75 cm × 2.0 cm × 2.2 cm human cerebellum block for blood vessels (using Griffonia simplicifolia lectin I, GSL-I ) (Fig. ). As light-sheet microscopy is suboptimal due to the large human sample, we assessed the INSIHGT staining penetration on the cut face along the thickest dimension using confocal microscopy (Fig. , Supplementary Fig. ). This again reveals excellent homogeneity with no decay of signal across the centimeter of penetration depth. This shows that the use of boron cluster-based host-guest chemistry remains applicable for highly complex environments at the centimeter scale. The results further show that macromolecular transport within a dense biological matrix can remain unrestricted in a non-denaturing manner by globally adjusting inter-biomolecular interactions. We further applied INSIHGT to a 1.0 cm × 1.4 cm × 1.4 cm human brainstem with dementia with Lewy bodies (DLB) for phosphorylated alpha-synuclein at serine 129 (αSyn-pS129) (Fig. , Supplementary Fig. ). The large scale of imaging enabled registration and hence correlation with mesoscale imaging modalities such as magnetic resonance imaging (MRI) (Fig. , Supplementary Movie ). With this, we confirmed the localization of Lewy body pathologies to the locus ceruleus complex–subcerulean nuclei and substantia nigra, in keeping with the prominent rapid eye movement sleep behavior disorder (RBD) symptoms of this patient. Such a radio-histopathology approach would allow for correlative structural-molecular studies for neurodegenerative diseases. Overall, the capability of INSIHGT in achieving centimeter-sized tissue staining bridges the microscopic and mesoscopic imaging modalities, providing a general approach to correlative magnetic resonance-molecular imaging.
We next extended along the molecular dimension on conventionally fixed tissues, where highly multiplexed immunostaining-based molecular profiling in 3D had not been accomplished previously. A single round of INSIHGT-based indirect immunofluorescence plus lectin histochemistry can simultaneously map up to 6 antigens (Supplementary Fig. ), tolerating a total protein concentration of >0.5 μg/μl in the staining buffer, and is limited only by spectral overlap and species compatibility. To achieve higher multiplexing, antibodies can be stripped off with 0.1 M sodium sulfite in the [B 12 H 12 ] 2- -containing buffer after overnight incubation at 37 °C (Fig. , Supplementary Fig. ). Since [B 12 H 12 ] 2− does not significantly disrupt intramolecular and intermolecular noncovalent protein interactions, the approach can be directly applied to routine formaldehyde-fixed tissues; we observed no tissue damage and little distortion, obviating the need for additional or specialist fixation methods. We exemplified this approach by mapping 28 marker expression levels in a 2 mm-thick mouse hypothalamus slice over 7 imaging rounds (Fig. , Supplementary Figs. , ). With each iterative round taking 48 h (including imaging, retrieval and elution), the whole manual process from tissue preparation to the 28-plex image took 16 days. After registration and segmentation using Cellpose 2.0 (Fig. , see “Methods”), we obtained 192,075 cells and their differentially expressed proteins (DEPs) based on immunostaining signals. Note that other user-friendly approaches such as StarDist and BCFind can also be used. Omitting the 3 blood vessel channels, we then obtained, for each cell, the normalized mean intensities of the remaining 25 markers, the standard deviations (S.D.s) of the signal intensities of the same 25 markers, and the distance to the nearest vessel for dimensionality reduction analysis and clustering. The S.D. of signal intensities for each cell served as a measure of heterogeneous expression of a given marker within the cell (e.g., strictly cytoplasmic or nuclear expression will have a higher S.D. than a marker expressed in both the cytoplasm and nucleus, as illustrated in Fig. ). Uniform manifold approximation and projection (UMAP) analysis of a subset of 84,139 cells based on these 51 features (Fig. , Supplementary Figs. , ) revealed 42 cell type clusters, allowing their 3D spatial interrelationships to be determined (Supplementary Fig. ). INSIHGT allows both 3D morphology and molecular information to be well-visualized via immunostaining, which is more difficult to access via current section-based spatial transcriptomics or single-cell multi-omics despite ongoing efforts . Recent characterizations of neuronal network activities based on the diffusional spread of neuropeptides highlight the need for 3D spatial mapping of protein antigens. To obtain these morphological-molecular relationships using INSIHGT, we segmented the neuropeptide Y (NPY)-positive fibers and computed the 3D distance to each UMAP-clustered cell type's somatic membrane (Fig. ). While most clusters lay at similar distances from NPY fibers, certain clusters (notably right tile clusters 1 and 2) were more proximally associated with NPY fibers, suggesting that these cell clusters are differentially modulated by NPY, assuming isotropic diffusion in the local brain parenchyma.
Nonetheless, our dataset and analysis demonstrated it is possible to estimate the likely modulatory influence for a given cell-neuropeptide pair, providing an alternative approach to discovering neuronal dynamics paradigms.
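The per-cell feature table described above (marker means, marker S.D.s, and distance to the nearest vessel) lends itself to an equivalent open-source pipeline. The sketch below uses umap-learn and scikit-learn in place of the MATLAB UMAP 4.4 package used in Methods, with synthetic data standing in for the real feature matrix; it is an approximation of the workflow, not the original code.

```python
# Sketch: nested UMAP + DBSCAN clustering of a per-cell feature matrix
# (n_cells x 51: 25 marker means, 25 marker S.D.s, distance to nearest vessel).
import numpy as np
import umap                                   # pip install umap-learn
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler

def cluster_cells(features):
    x = StandardScaler().fit_transform(features)
    emb = umap.UMAP(n_neighbors=15, min_dist=0.05, metric="euclidean").fit_transform(x)
    first = DBSCAN(eps=0.6).fit_predict(emb)
    largest = np.bincount(first[first >= 0]).argmax()
    keep = first != largest                   # drop the largest cluster, as in Methods
    emb2 = umap.UMAP(n_neighbors=15, min_dist=0.05, metric="euclidean").fit_transform(x[keep])
    final = DBSCAN(eps=0.6).fit_predict(emb2)
    return keep, final

features, _ = make_blobs(n_samples=[1500, 400, 300], n_features=51, random_state=0)
keep, clusters = cluster_cells(features)
print(f"{np.unique(clusters[clusters >= 0]).size} clusters among {keep.sum()} retained cells")
```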
We found that the INSIHGT process, from fixation to completion, preserves delicate structures such as free-hanging filaments and podia, enabling fine-scale analysis of compact structures such as the renal glomeruli. We found unsuspected intercellular contacts traversing the Bowman space, which were not known to be present in normal glomeruli even with serial sectioning electron microscopy studies – (Fig. ). These filaments mostly originated from the podocytic surface, although some were also seen to emerge from PECs. They were numerous and found around the glomerular globe (Fig. ), and varied in their length, distance from each other, and morphology (Fig. , Supplementary Fig. ). We classified these podocyte-to-PEC microfilaments into “reachers” and “stayers”, depending on whether they reached the PEC surface or not (Fig. ). Microfilaments of the reachers type were more numerous than those of the stayers type per glomerulus (Fig. ). Visually, we noted that the emergence points of these filaments tended to cluster together, especially for the reachers type. To quantify such spatial clustering, we calculated the glomerular surface geodesic distances between the podocytic attachment points for each microfilament; these showed an inverse relationship with path length (Fig. ), and reachers-type filaments were geodesically nearer to each other than stayers-type filaments (Supplementary Fig. ). This suggests that the emergence of long, projecting microfilaments that reach across the Bowman space is localized to a few hotspots on the glomerular surface. Whether these hotspots of long-reaching microfilaments are driven by signals originating from the podocytes, the glomerular environment underneath, or the nearest PECs across the Bowman space remains to be investigated and may reveal previously unsuspected podocyte physiological responses within their microenvironments. Notably, similar structures have been observed in the pathological state of crescentic glomerulonephritis, in conjunction with whole cells traversing the Bowman space. As crescentic glomerulonephritis is a final common pathway of glomerulonephropathies, it would be interesting to investigate whether there is a continuum of progressive changes from physiological microfilaments to larger, pathological trans-Bowman space connections. In addition, morphologically similar structures have been observed in microglia , pericytes , between tumor and immune cells , and between normal and apoptotic cells in cell culture . The podocyte-PEC connections described here thus add another organ to the growing list of nanostructural connections mediating information and matter exchange between different cell types in their physiological states.
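The surface geodesic distances underlying this clustering analysis (defined formally in Methods) can be approximated by running Dijkstra's algorithm over the 26-connected voxels of a surface mask. The following is a minimal sketch with hypothetical inputs, not the authors' MATLAB implementation.

```python
# Sketch: approximate geodesic distance between voxels p and q constrained to a
# voxelized surface mask A (boolean 3D array), via Dijkstra over 26-connectivity.
import heapq
import numpy as np
from scipy.ndimage import binary_erosion

def geodesic_distance(surface, p, q):
    offsets = [(i, j, k) for i in (-1, 0, 1) for j in (-1, 0, 1) for k in (-1, 0, 1)
               if (i, j, k) != (0, 0, 0)]
    dist, heap = {p: 0.0}, [(0.0, p)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == q:
            return d                              # minimal path length within the surface
        if d > dist.get(v, np.inf):
            continue
        for o in offsets:
            n = (v[0] + o[0], v[1] + o[1], v[2] + o[2])
            if all(0 <= n[i] < surface.shape[i] for i in range(3)) and surface[n]:
                nd = d + np.linalg.norm(o)        # step length: 1, sqrt(2), or sqrt(3)
                if nd < dist.get(n, np.inf):
                    dist[n] = nd
                    heapq.heappush(heap, (nd, n))
    return np.inf                                 # q unreachable within the surface

# Toy example: boundary shell of a solid cube
vol = np.zeros((20, 20, 20), dtype=bool)
vol[5:15, 5:15, 5:15] = True
shell = vol & ~binary_erosion(vol)
print(geodesic_distance(shell, (5, 5, 5), (14, 14, 14)))
```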
We next completely mapped a 3 mm-thick (post-dehydration dimensions) human cerebellar folium for NF-H, GFAP, and blood vessels (Fig. , Supplementary Figs. , , Supplementary Movie ), with preserved details down to the Bergmann glia fibers, perivascular astrocytic endfeet, and Purkinje cell axons, which makes them amenable to 3D orientation analysis and visualization (Fig. , Supplementary Figs. , ). The detailed visualization of filamentous structures throughout the 3 mm thickness is in stark contrast to our previous attempts with similar specimens employing various methods, which showed weak NF-H signal in cerebellar sulci and barely visible GFAP signal in cerebellar white matter due to poor antibody penetration. We discovered sparsely distributed NF-H-intense inclusions that are easily missed in 2D sectioning and thus remain poorly characterized. We manually traced and identified 1078 inclusions throughout the entire imaged volume (Fig. ); they were found in all three basic layers of the cerebellar cortex. A typical morphology of one type of these inclusions is a single bright globular body at the sub-Purkinje-layer radial location, with an elongated thick fiber extension that coils back and projects to the adjacent molecular layer (Fig. ). However, much more protean morphologies also exist (Fig. , Supplementary Fig. ). To capture the morphological and spatial diversity of these inclusions, we obtained their spatial-morphometric statistics (Supplementary Fig. ), followed by principal component analysis of the compiled morphometrics, such as Sholl analysis and Horton-Strahler number. The results reveal most of these inclusions to be morphologically homogeneous, with variations explained largely by their path lengths, and a small subset characterized by much higher branching of the NF-H-intense filaments (Supplementary Fig. ). However, further understanding of these inclusions awaits broader investigations in normal and various disease states other than DLB. Preliminarily, we have also observed these inclusions in normal human cerebellum tissues (Supplementary Fig. ). With advancements in technology, correlated multi-pronged approaches using superresolution microscopy, electron microscopy, and spatially resolved proteomics are expected to greatly help clarify the pathobiology of these inclusions.
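The structure-tensor orientation analysis used for these fiber visualizations (formal definitions in Methods) can be condensed into a few lines of NumPy/SciPy. The following is an illustrative re-implementation rather than the original MATLAB code; the smoothing scales sigma_grad and sigma_smooth are assumed values, as the original parameters are not specified here.

```python
# Sketch: per-voxel 3D structure tensor, fractional anisotropy, and principal
# fiber orientation. `img` is a hypothetical 3D array (e.g., the NF-H channel).
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_fa(img, sigma_grad=1.0, sigma_smooth=2.0):
    # Derivative-of-Gaussian gradients along each axis
    grads = [gaussian_filter(img, sigma_grad, order=[1 if ax == i else 0 for ax in range(3)])
             for i in range(3)]
    # Gaussian-smoothed outer products <Ii*Ij>_N
    T = np.empty(img.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            T[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma_smooth)
    w, v = np.linalg.eigh(T)                         # eigenvalues in ascending order
    l1, l2, l3 = w[..., 2], w[..., 1], w[..., 0]
    fa = np.sqrt(((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
                 / (2 * (l1 ** 2 + l2 ** 2 + l3 ** 2) + 1e-12))
    principal = v[..., :, 0]                         # eigenvector of the least eigenvalue
    return fa, principal

rng = np.random.default_rng(3)
fa, vec = structure_tensor_fa(gaussian_filter(rng.random((32, 64, 64)), 1))
print(fa.shape, vec.shape)
```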
The bio-orthogonal nature of the INSIHGT chemical system underlies its non-destructiveness. To highlight the clinical impact of INSIHGT in addition to 3D imaging of human samples, we found that INSIHGT-processed samples can be retrieved and processed as naïve tissues for traditional 2D histology via paraffin wax embedding and sectioning. Notably, staining qualities of routine hematoxylin and eosin (H&E) and various special stains on the post-INSIHGT processed slides were indistinguishable from the pre-INSIHGT processed slides even by a senior pathologist (Fig. ). In addition to not interfering with downstream clinical processes, the preserved quality of special staining allows for multi-modal cross-validation of 3D fluorescent imaging findings, making INSIHGT the ideal platform choice for next-generation histopathology (Fig. ). Together with the possibility for post-INSIHGT DNA and RNA sequencing, we envision (Supplementary Fig. ) quantitative 3D information within clinical specimens can be maximally extracted and preserved with high authenticity in a non-consumptive manner using INSIHGT, and its fast speed promises compatibility with current clinical workflows and constraints, allowing digital pathology and precision medicine to benefit from 3D analysis.
The convergence of multiple technological advances has paved the way for the acquisition of large-scale molecular phenotyping datasets at single-cell resolution, most notably single-cell transcriptomics . With a large number of previously undiscovered cell states, the quest to extend towards spatially resolved cell phenotyping based on translated and post-translationally modified biomolecular signatures is paramount to understanding their structural and functional properties in biology . Scalable, high-resolution 3D tissue mapping provides a powerful approach to further our understanding of these previously unidentified cell types. Clinically, 3D histology has been shown to improve diagnosis in bladder cancer , predict biochemical recurrence in prostate cancer , and evaluate response to chemotherapy in ovarian carcinoma . By sampling across whole intact samples, 3D histology can deliver unbiased, quantitative, ground-truth data on the spatial distributions of molecules and cell types in their native tissue contexts . However, 3D tissue imaging is yet to be widely adopted despite the increasing accessibility of tissue clearing, optical sectioning microscopy, and coding-free image processing software. This is in large part due to the limited probe penetration that plagues the field regardless of the combination of these technologies employed , , yielding variable, surface-biased data with questionable representativeness. Creative approaches have provided solutions to the penetration problem but are limited in their scalability and accessibility . Constrained by the requirements of non-advective approaches and compatibility with off-the-shelf reagents, the development of INSIHGT involved re-examining biomolecular transport and protein stability from first principles, which led us to identify a weakly coordinating superchaotrope, and the modulation of its chemical activity by in situ host–guest reactions, to implement our theoretical formulation. With the use of closo-dodecaborate and cyclodextrin as additives in PBS, we solved the bottleneck of 3D histology by providing a cost-efficient, scalable, and affordable approach to quantitatively map multiple molecules in centimeter-sized tissues. With a tissue processing pipeline equivalent to iDISCO , INSIHGT shares the same affordability and scalability while providing much faster processing and greatly improved image quality, owing to enhanced antibody penetration depth and homogeneity. Mapping tissue blocks simultaneously in multi-well dishes is easily accomplished in any basic molecular biology laboratory. Such simplicity in operation makes it highly accessible and automatable, as it requires no specialized equipment or skills. Furthermore, cocktails of off-the-shelf antibodies can be directly added to PBS supplemented with [B 12 H 12 ] 2− . Finally, we note that both [B 12 H 12 ] 2− salts and cyclodextrins are non-hazardous and stable indefinitely at ambient temperatures . With the affordability and accessibility of INSIHGT, we anticipate its diverse applications in 2D and 3D histology. Meanwhile, boron cluster-based supramolecular histochemistry can form the backbone for 3D spatial molecular-structural-functional profiling methods and studies, as well as atlas mapping efforts. The high-depth, quantitative readout of well-preserved tissue biomolecules offered by INSIHGT forms the foundation for multiplexed, multi-modal, and multi-scale 3D spatial biology.
By making non-destructive 3D tissue molecular probing accessible, INSIHGT can empower researchers to bridge molecular-structural inferences from the subcellular to the organ-wide level, even up to clinical radiological imaging scales for radio-histopathological correlations. Finally, the compatibility of INSIHGT with downstream traditional 2D histology methods indicates its non-interference with subsequent clinical decision-making. This paves the way for the translation and development of 3D histology-based tissue diagnostics, promising rapid and accurate generation of ground-truth data across entire tissue specimens. We recognize that INSIHGT still has room for further improvement. Immunostaining penetration homogeneity for larger tissues and denser antigens can be further enhanced: practically, INSIHGT is currently limited to tissues of up to ~2 cm 3 , and extremely dense antigens such as GAPDH, type I collagen, actin, and myosin remain difficult to stain homogeneously at the whole-organ level. Nonetheless, for any antigen stained using the iDISCO+ protocol with 7 days of primary antibody staining, INSIHGT with 3 days of antibody staining will provide at least a 10–20× penetration enhancement, along with a noticeable improvement in penetration homogeneity. Penetration can be further enhanced by prolonging the incubation times and ensuring an adequate amount of probe has been added relative to the tissue expression level (see “Supplementary Note and INSIHGT protocol therein”). If available, the use of primary nanobodies with fluorescently-labeled secondary whole IgGs will further increase the penetration by about 5–10 times. In addition, the penetration homogeneity of small molecule dyes and lectins was still suboptimal for millimeter-scale tissues and remains to be improved. In multi-round immunostaining, we noticed that the staining specificity and sensitivity deteriorated with each round of antibody elution with sulfite or β-mercaptoethanol, calling for a better 3D immunostaining elution method. Alternatively, hyperspectral imaging , nonlinear optics , time-resolved fluorescence techniques , and same-species antibody multiplexing could be explored to extend the multiplexing capabilities of INSIHGT. Finally, although theoretically applicable, we have yet to apply INSIHGT-based multi-round staining to tissues from other species. Our discovery of boron clusters' capability to solubilize proteins globally in a titratable manner, combined with their bio-orthogonal removal with supramolecular click chemistry, can reach beyond histology applications. Given the surprisingly robust performance of INSIHGT in complex tissue environments, we envision it can be applied in simpler in vitro settings to control intermolecular interactions, particularly those involving proteins, in a spatiotemporally precise manner.
Ethical statement

For animal tissues, all experimental procedures were approved by the Animal Research Ethics Committee of the Chinese University of Hong Kong (CUHK) and were performed in accordance with the Guide for the Care and Use of Laboratory Animals (AEEC number 20-287-MIS). The housing of animals was provided by the Laboratory Animal Service Center of CUHK. For human tissues donated post-mortem, prior ethics approvals were obtained from the Joint Chinese University of Hong Kong-New Territories East Cluster Clinical Research Ethics Committee (approval number 2022.137), with consent obtained from the donor and his family.

Chemicals and reagents

The antibodies utilized in this study are listed in Supplementary Table . All protein-conjugating fluorophores tested and their compatibility with INSIHGT are listed in Supplementary Table . Secondary Fab fragments or nanobodies were acquired from Jackson ImmunoResearch or Synaptic Systems, and all lectins were sourced from VectorLabs. Conjugation of secondary antibodies and lectins with fluorophores was achieved through N-hydroxysuccinimidyl (NHS) chemistry. The process was conducted at room temperature for a duration exceeding 16 h at antibody concentrations >3 mg/ml, using a tenfold molar excess of the reactive dye-NHS ester. Dodecahydro-closo-dodecaborate salts and other boron cluster compounds were procured from Katchem, while cyclodextrin derivatives were obtained from Cyclolab, Cyclodextrin Shop, or Sigma Aldrich. We noticed that the chemicals involved in the INSIHGT process occasionally require purification. Specifically, for Na 2 [B 12 H 12 ], if insoluble flakes were noticed after dissolution in PBS, the solution was acidified to pH 1 with concentrated hydrochloric acid and extracted with diethyl ether (Sigma Aldrich), and the organic solvent was removed by distillation with a warm water bath. The residual H 2 B 12 H 12 was then dissolved in a minimal amount of water and neutralized with 1 M Na 2 CO 3 solution until pH 7 was reached with no further effervescence. The solution was then concentrated by distillation under vacuum and dried in an oven. For 2-hydroxypropyl-γ-cyclodextrin and sulfobutylether-β-cyclodextrin, if insoluble specks or dust were noticed after dissolution in PBS, the solution was vacuum filtered through 0.22 μm hydrophilic cellulose membrane filters (GSWP14250) using a Buchner funnel before use. A slight brownish-yellow discoloration of the resulting solution does not interfere with the INSIHGT results. For benzyl benzoate, if the solution is yellowish (possibly due to fluorenone impurities), the solvent is poured into a metal bowl or glass crystallization dish and refrigerated at 4 °C until crystallization begins. If no crystallization occurs, a small crystal seed of benzyl benzoate, obtained by freezing the solvent at −20 °C in a microcentrifuge tube, can be added to the cooled solvent to kick-start the process. The crystals were then collected by vacuum filtration with air continuously drawn at room temperature until the crystals were white, and then warmed to 37 °C to yield clear, colorless benzyl benzoate. If the resulting benzyl benzoate is cloudy, 3 Å molecular sieves were added to the solvent to absorb the admixed water from condensation before being filtered off, resulting in clear, colorless benzyl benzoate. This purified benzyl benzoate is ready to constitute the BABB clearing solution for imaging.

Human and animal tissues

Adult male C57BL/6 mice were utilized.
These mice were housed in a controlled environment (22–23 °C) with a 12-h light-dark cycle, provided by the Laboratory Animal Service Center of CUHK. Unrestricted access to a standard mouse diet and water was ensured, and the environment was maintained at <70% relative humidity. Tissues were perfusion formaldehyde-fixed and collected by post-mortem dissection. In the case of immunostaining for neurotransmitters where Immusmol antibodies were used, the tissues were perfusion-fixed with the STAINperfect™ immunostaining kit A (Immusmol) with the antibody staining steps replaced with those in our INSIHGT method. For human tissues, brain and kidney tissues donated post-mortem by a patient (aged 77 at the time of passing) were used in this study. Prior ethics approvals have been obtained and approved by the Joint Chinese University of Hong Kong-New Territories East Cluster Clinical Research Ethics Committee (approval number 2022.137), with consent from the donor and his family. Human dissection was performed by an anatomist (HML) after perfusion fixation with 4% paraformaldehyde via the femoral artery. The post-mortem delay to fixation and tissue harvesting was 4 weeks at −18 °C refrigeration, and the fixation duration was 1 week at room temperature. The corresponding organs were then harvested and stored in 1x PBS at room temperature until use. Screening deep staining approaches with in situ antibody recovery 4% PFA-fixed, 1mm-thick mouse cerebellum slices, 0.5 μg anti-parvalbumin antibody (Invitrogen, PA1-933), and 0.5 μg AlexaFluor 647-labeled Fab fragments of Donkey anti-Rabbit antibody (Jackson Immunoresearch 711-607-003) were used in this experiment to develop our method. Co-incubation of the secondary Fab fragment and primary antibody was utilized for 1-step immunostaining. All stainings were performed with an overnight immunostaining first stage at room temperature (unless specified otherwise) in various buffers, with subsequent recovery secondary stage at room temperature (unless specified otherwise) in various buffers, as detailed for each strategy below. The tissues were then washed in 1x PBSN, dehydrated with graded methanol, and cleared in BABB, before proceeding to imaging with confocal microscopy. For the SDS/αCD system, immunostaining was performed in a solution consisting of 10 mM sodium dodecylsulphate (SDS) in 1xPBS, while recovery was performed with a solution consisting of 10 mM αCD in 1x PBS. For the GnCl/GroEL+GroES system, immunostaining was performed in solution consisting of 6 M guanidinium chloride in 1x PBS, while recovery was performed with GroEL+GroES refolding buffer, consisting of 0.5 μM GroEL (MCLabs GEL-100), 1 μM GroES (MCLabs GES-100), 2.5 mM adenosine triphosphate, 20 mM Tris base, 300 mM NaCl, 10 mM MgSO 4 , 10 mM KCl, 1 mM tris(2-carboxylethyl)phosphine hydrochloride, 10% glycerol, with pH adjusted to 7.9 . For the sodium deoxycholate (SDC)/βCD system, immunostaining was performed in a solution consisting of 15 mM sodium deoxycholate (SDC) with 240 mM Tris base, 360 mM CAPS ( N -cyclohexyl-3-aminopropanesulfonic acid), with pH adjusted to 8, while recovery was performed with a solution consisting of 15 mM βCD with 240 mM Tris base, 360 mM CAPS, with pH adjusted to 8. For the Na 2 [B 12 H 12 ]/γCD system, immunostaining was performed in a solution consisting of 0.1 M Na 2 [B 12 H 12 ] in 1x PBS, while recovery was performed in a solution consisting of 0.1 M γCD in 1x PBS. 
Benchmarking experiments We designed a stringent benchmarking scheme for quantitative evaluation of antibody penetration depth and signal homogeneity across depth for comparison across existing deep immunostaining methods, based on our previously described principles (Supplementary Fig. ) The benchmarking experiment is carried out in two parts, the first part using a whole mouse hemisphere stained in bulk with anti-Parvalbumin (PV) antibodies with excess AlexaFluor 647-conjugated secondary Fab fragments—termed bulk-staining—after which the tissue is cut coronally at defined locations using a brain matrix and re-stained with anti-PV antibodies and AlexaFluor 488-conjugated secondary Fab fragments—termed cut-staining (Supplementary Fig. ). Hence, signals from bulk-staining can be distinguished easily from cut-staining and reveal different penetration depths of the two-staged immunostaining. We tested different deep immunostaining methods in the bulk-staining stage of the experiments, while the cut-staining was performed in 1× PBS with 0.1% Tween-20 as a conventional immunostaining buffer. The bulk-staining duration for INSIHGT was 24 h in benchmarking. All benchmarking samples were perfusion-fixed with 4% paraformaldehyde (PFA) in 1× PBS followed by post-fixation in 4% PFA overnight at 4 °C, except for SHIELD and mELAST samples where the SHIELD protocol was used. In addition, the final RI matching where the benzyl alcohol/benzyl benzoate (BABB) clearing method was universally employed to standardize the changes in tissue volumes and hence penetration distance adjustments. The standardized optical clearing avoids the variability in fluorescent quenching and tissue shrinkage/expansion introduced by different RI matching agents. For bulk-staining during our benchmarking experiment, we followed the published protocols except for eFLASH and mELAST due to the lack of specialized in-house equipment. For eFLASH , we stained the SHIELDed and SDS-delipidated tissue in the alkaline sodium deoxycholate buffer (240 mM Tris, 160 mM CAPS, 20% w/v D-sorbitol, 0.9% w/v sodium deoxycholate) and titrated-in acid-adjusting booster buffer (20% w/v D-sorbitol and 60 mM boric acid) hourly over 24 h to achieve a −0.1 ± 0.1 pH/h adjustment rate, using primary IgGs with secondary fluorophore-labeled Fab fragments. The tissue was then washed with 1× PBSTN (1× PBS, 1% v/v Triton X-100, and 0.02% w/v NaN 3 ) two times 3 h each before imaging. For mELAST , , , we stained the SHIELDed and SDS-delipidated tissue with the antibody and Fab fragments in 0.2 × PBSNaCh (0.2× PBS, 5% w/v NaCh and 0.02% w/v NaN 3 , 5% v/v normal donkey serum) first for 1 day at 37 °C without embedding the SHIELDed tissue in elastic gel nor compression/stretching, followed by adding Triton X-100 to a final concentration of ~5% and incubated for 1 more day. The tissue was then washed with 1× PBSTN 2 times 3 h each before imaging. For CUBIC HistoVision and iDISCO , the tissue was processed and stained as previously described . The staining durations were 14 days for CUBIC HistoVision and 7 days for iDISCO (both using primary IgGs with secondary fluorophore-labeled Fab fragments). 
For SHANEL , the tissue was first delipidated with CHAPS/NMDEA solution (10% w/v CHAPS detergent and 25% w/v N-methyldiethanolamine in water) for 1 week, then further delipidated with dichloromethane/methanol as in iDISCO, treated with 0.5 M acetic acid for 2 days, washed in water for 6 h twice, treated with guanidinium solution (PBS with 4 M guanidinium chloride, 0.05 M sodium acetate, 2% w/v Triton X-100) for 2 days, blocked in blocking buffer (1× PBS, 0.2% v/v Triton X-100, 10% v/v DMSO, 10% goat serum) for 1 day, and finally stained in antibody incubation buffer (1× PBS, 0.2% v/v Tween-20, 3% v/v DMSO, 3% v/v goat serum, 10 mg/L heparin sodium) using primary IgGs with secondary fluorophore-labeled Fab fragments for 7 days. For quantification, PV-positive cells were identified using a Laplacian of Gaussian filter, followed by intensity-based segmentation. These segmented masks allowed quantification of the bulk- and cut-staining channel intensities, in addition to the distance transformation intensity, performed in MATLAB R2023a (MathWorks, US). For an ideal deep immunostaining, the bulk-immunostaining signals should be independent of the bulk-staining penetration distances (computed by distance transform of the segmented tissue boundaries) and should correlate perfectly with the cut-immunostaining signals. This is often not the case, as “rimming” of bulk-staining signals inevitably occurs as a “shell” around the tissue due to more easily accessible antigens on the bulk-stained tissue surface. The rimming effect can be quantified by fitting a single-term exponential decay curve:
$$\frac{\text{bulk-staining intensity}}{\text{cut-staining intensity}} = e^{-\tau\,(\text{bulk-staining penetration distance})} \qquad (1)$$

and evaluating the decay constant, tau (τ), across penetration depths, with τ → 0+ as we approach the ideal case.

Screening chemicals for INSIHGT

We first pre-screened the weakly coordinating superchaotropes (WCS) by immunostaining for parvalbumin in 1 mm 3 mouse cortex tissue cubes in the presence of each WCS at 0.1 M; after 1 day of incubation at room temperature, the staining solution was aspirated, and 0.1 M of the corresponding cyclodextrin was added and incubated overnight. The tissue was then washed in PBSN for 15 min twice, cleared with the BABB method, and imaged. This procedure eliminated [B 12 Br 12 ] 2− , [B 12 I 12 ] 2− , and [PW 12 O 40 ] 3− (as cesium or sodium salts), as they did not give the correct immunostaining pattern or led to tissue destruction. We tested [Fe(C 5 H 5 ) 2 ] + (as the hexafluorophosphate salt) for the sake of completeness, as a low-charge, large-sized cation. To benchmark the ability to achieve deep and homogeneous immunostaining, the benchmarking procedure described above was used. Mouse hemibrains were fixed, washed, and stained with 1 μg rabbit anti-parvalbumin antibody and 1 μg AlexaFluor 647-labeled donkey anti-rabbit secondary antibody Fab fragments in 0.1 M of the WCS. The staining proceeded for 1 day, after which the solution was replaced with 0.1 M of the corresponding cyclodextrin (or its derivatives) and incubated overnight. The hemibrains were then washed in PBSN for 1 h twice, cut in the middle coronally, and re-stained for parvalbumin using AlexaFluor 488-labeled secondary Fab fragments. The tissue was then washed, cleared with the BABB method, and imaged on the cut face using a confocal microscope.

INSIHGT

A detailed step-by-step protocol used in this study is given below. As a general overview, tissues were typically fixed using formalin or 4% paraformaldehyde, thoroughly washed in PBSN, and pre-incubated overnight at 37 °C in INSIHGT buffer A. The tissues were then stained with a solution containing the desired antibodies, Fab fragments, lectins, and SBEβCD-complexed nucleic acid probes in INSIHGT buffer A, ensuring a final [B 12 H 12 ] 2− concentration of 0.25 M. Staining duration varied from 6 h to 10 days based on tissue size, antigen, and required homogeneity (please see the calculation of time t in the step-by-step protocol). Post-staining, the solution was aspirated and replaced with INSIHGT buffer B (0.25 M 2-hydroxypropyl-γ-cyclodextrin in PBS) without prior washing, followed by a minimum 6-h incubation with adequate shaking of the viscous buffer. After sufficient PBSN washing, tissues were ready for imaging or clearing. Over-incubation at any step for up to 60 days was tolerable. After imaging, the antibodies can be eluted with 0.1 M sodium sulphite in INSIHGT buffer A at 37 °C overnight.

Screening antibodies compatible with INSIHGT

To test antibodies in a high-throughput manner, we compiled a list of antibodies, reviewed their tissue expression and staining patterns in the literature, and then obtained the respective tissues known to have positive staining. These tissue blocks or entire organs were then washed, dehydrated, delipidated, rehydrated, washed, and infiltrated with INSIHGT solution A as described in the INSIHGT protocol.
These INSIHGT-infiltrated tissues were then cut into ~1 mm 3 tissue cubes and placed in a 96-well plate as indicated in the list, with each well containing 70 μl of 1x INSIHGT solution A. About 0.5 μg of the primary antibody to be tested was then added, together with 0.5 μg of the corresponding AlexaFluor 647- or AlexaFluor 594-conjugated secondary antibody Fab fragment. The AlexaFluor 647 and 594 fluorophores were chosen to minimize interference from tissue autofluorescence with result interpretation. To account for the total volume of antibodies added to each well, an equal volume of 2x INSIHGT solution A was then added to ensure a final concentration of 1x INSIHGT solution A. The plate was then sealed and the staining was allowed to proceed in the dark overnight at room temperature. The tissues were then washed in INSIHGT solution B for 2 h and in PBSN for 1 h twice, and then dehydrated through 15-min incubations in 50% methanol, 100% methanol, and 100% methanol again. The tissues were then cleared in BABB for 15 min before imaging. The total fixed tissue-to-image time for the antibody compatibility test is <36 h.

Comparison between 2D histological staining of post-INSIHGT and control tissues

Mouse and human samples were pre-processed as described above. Tissues were divided into the post-INSIHGT group, which underwent the INSIHGT protocol with 3 days of INSIHGT A incubation (without the application of antibodies) and 6 h of INSIHGT B incubation, plus BABB clearing, and the control group, which was immersed in PBSN for an equivalent period of time. Both groups were immersed in 70% ethanol, preceded by immersion in 100% ethanol for the post-INSIHGT group (which was in BABB) and in 50% ethanol for the control group (which was in PBSN). Tissues were then immersed in 100% ethanol, xylene, and paraffin as in the standard paraffin embedding process. The embedded tissues were cut into 5 μm (human) or 10 μm (mouse) sections, followed by 2D histological staining with special stains. Following standard protocols, H&E staining was performed on human brain and kidney, PAS staining on human kidney, Alcian blue staining on mouse colon, and Masson trichrome staining on mouse kidney samples.

Microscopy

Confocal microscopy was performed using a Leica SP8 confocal microscope equipped with excitation lasers at 405 nm, 488 nm, 514 nm, 561 nm, and 649 nm, with detection using a 10× (NA 0.4, Leica HC PL APO ×10/0.40 CS2) or a 40× oil-immersion (NA 1.30, Leica HC PL APO 40×/1.30 Oil CS2) objective and a tunable emission filter. A custom-built MesoSPIM v5.1 , equipped with lasers at 405 nm, 488 nm, 514 nm, 561 nm, 633 nm, and 675 nm, was used for light-sheet microscopy, with detection using an Olympus MVX-ZB10 zoom body with a magnification range of 0.63×–6.3×. The equipped emission filters were from AHF, including QuadLine Rejectionband ZET405/488/561/640, 440/50 ET Bandpass, 509/22 Brightline HC, 515/LP Brightline HC Longpass Filter, 542/27 BrightLine HC, 585/40 ET Bandpass, 594 LP Edge Basic Longpass Filter, 660/13 BrightLine HC, 633 LP Edge Basic Longpass Filter, and a 685/LP BrightLine HC Longpass Filter. Two-photon tomography was performed at 780 nm excitation using a 16× objective (NA 0.8, Nikon CFI75 LWD 16X W), equipped with four emission filters (ThorLabs 460-60, Semrock 525/50, Semrock 607/70, and Chroma ET 670/50). Basic image acquisition parameters for all microscopy images in this study are listed in Supplementary Table .
RNA and DNA quality control Control and INSIHGT-treated samples following the 1 mm 3 treatment timeline were re-embedded in paraffin wax and sent for nucleic acid integrity, sequencing, and bioinformatics analysis services provided by the BGI Hongkong Tech Solution NGS Lab. RNA integrity number analysis was performed using the Qubit Fluorometer. Whole genome DNA quality analysis was performed using the Agilent 2100 Bioanalyzer system. Sequencing was performed using the DNBSEQ TM sequencing technology platform. For transcriptomic comparison, the total clean bases were 11.2 Gb and 10.97 Gb for the control and INSIHGT-treated samples, respectively. The clean reads ratio after filtering was 90.64% and 89.96%, respectively. For whole genome sequencing, The total clean bases were 114.5 Gb and 125.2 Gb for the control and INSIHGT-treated samples, respectively, with both samples having a clean data rate of 100% and a mapping rate of 99.96%. RNA FISH HCR with INSIHGT Our RNA FISH HCR protocol is largely adapted from Choi et al. . The post-INSIHGT samples were first fixed in 4% PFA for 1 day. The samples were then pre-incubated in pre-hybridization buffer until the tissue sank to the bottom, and hybridized in hybridization buffer at 37 °C overnight. The next day, the tissue was washed in probe wash buffer for 1 h two times at room temperature, pre-incubated in amplification buffer for 30 min, followed by HCR amplification by incubating in amplification buffer with the addition of 30 pmol of fluorescently-labeled HCR hairpins and incubated overnight at RT. Note that the HCR hairpins were snap-cooled (heated at 95 °C for 2 min and cooled to RT for 30 min) in 10 μL 5× SSC buffer before application to ensure hairpin structures are formed . The samples were then washed thoroughly in 500 μL probe wash buffer for 30 min × 3 times to mitigate non-specific binding and later subjected to confocal imaging. The HCR probes which hybridize on the mRNA targets were custom-designed following the approach by Choi et al. , as shown in Table , and were purchased from Integrated DNA Technologies. Image processing No penetration-related attenuation intensity adjustments were performed for all displayed images except for the 3D renderings (but not 2D cross-sectional views) in Fig. and Supplementary Movie to provide the best visualization of an internal signal. For samples imaged with two-photon tomography, we noticed a thin rim attributed to the heat produced during the gelatin embedding process (which we verified by repeating the staining and confirming its absence with light sheet microscopy). We employed an intensity transformation mask based on the exponent of the distance from the whole organ mask surface. Image segmentation was performed with Cellpose 2.0 for cells implemented in MATLAB R2023b or Python, or with simple intensity thresholding. Affine and non-linear image registration was performed in MATLAB R2023a or manually in Adobe After Effects 2020 using the mesh warp effect and time remapping for z -plane adjustment. Image stitching was performed either with ImageJ BigStitcher plugin or assisted manually with Adobe After Effects 2020 followed by tile allocation using custom-written scripts in MATLAB R2023a. 3D image visualization and Movie rendering were performed with Bitplane Imaris 9.1, which were done as raw data with brightness and contrast adjustments, except for the whole mouse brain imaged with two-photon tomography. 
To remove their slicing artifacts, we resliced the volume into x-z slices, performed a z-direction Gaussian blur, followed by a 2D Fourier transform and filtered out non-central frequency peaks before inverting the transform. Finally, a Richardson-Lucy deconvolution was performed with a point-spread function elongated in the x-z direction, and the volume was resliced back into x-y slices.

Segmentation and analysis of podocyte-to-PEC microfilaments in mouse kidneys

Podocyte-to-PEC microfilaments of 14 mouse kidneys were manually traced via the SNT plugin in ImageJ . Path properties of the tracings were then exported for further analysis using custom codes in MATLAB R2023a. Distance transforms were performed under manually curated glomerulus and Bowman space masks, such that each voxel value corresponds to the distance between that voxel and the nearest nonzero voxel of the Bowman space mask. Path displacement $d_{fil}$ was computed via the Pythagorean theorem using the start and end coordinates of the filament. Minimal distance $d_{\min}$ is defined as the voxel value difference between the start and end coordinates. Path length $d_{path}$ is directly measured via SNT. Tortuosity is defined as $d_{path}/d_{fil}$, skewness is defined as $d_{fil}/d_{\min}$, and the angle of take-off is defined as the angle between the unit gradient vector of the distance transform and the unit path displacement vector. The geodesic distance $d_A(p,q)$ between voxels $p, q \in A$ is defined as the minimum of the lengths $L$ of path(s) $P = (p_1, p_2, \ldots, p_l)$ connecting $p$ and $q$, where $A$ is the set of all voxels constituting the surface of the glomerular mask:
$$d_A(p,q) = \min\{L(P) : p_1 = p,\ p_l = q,\ P \subseteq A\} \qquad (2)$$

Correlation statistics were then performed via GraphPad Prism version 8 for Windows, GraphPad Software, Boston, Massachusetts USA, www.graphpad.com . Tracing and statistical analysis for the human cerebellar neurofilament inclusions were performed analogously.

Spatial orientation and fractional anisotropy visualization of human cerebellum neural and glial filaments

To visualize cerebellar neural and glial fibers in their preferred orientations, we performed structure tensor analysis with orientation-based color-coding in 3D. In detail, let $G: \mathbb{R}^3 \times \mathbb{R}_+ \to \mathbb{R}$ be a 3D Gaussian kernel with standard deviation $\sigma$:

$$G(x,y,z,\sigma) = \frac{1}{(2\pi\sigma^2)^{3/2}} \exp\left(-\frac{x^2 + y^2 + z^2}{2\sigma^2}\right) \qquad (3)$$

Define a 3D image as a function $I: \mathbb{R}^3 \to \mathbb{R}$ which outputs the spatial voxel values. The gradient $\nabla I: \mathbb{R}^3 \to \mathbb{R}^3$ of $I$ at each voxel is obtained by convolving $I$ with the spatial derivatives of $G$:

$$\nabla G(x,y,z,\sigma) = \left(\frac{\partial G}{\partial x}, \frac{\partial G}{\partial y}, \frac{\partial G}{\partial z}\right) \qquad (4)$$

$$\nabla I = I * \nabla G \qquad (5)$$

where $*$ denotes the convolution operation. Compute the structure tensor $T: \mathbb{R}^3 \to \mathbb{R}^{3 \times 3}$ as the outer product of $\nabla I$ with itself:

$$T(x,y,z) = \nabla I \otimes \nabla I \qquad (6)$$

$T$ is then smoothed over a neighborhood $N$ via convolution with $G$ to give $\bar{T}$:

$$\bar{T}(x,y,z) = G * T(x,y,z) \qquad (7)$$

$$\bar{T}(x,y,z) = \begin{bmatrix} \langle I_x^2 \rangle_N & \langle I_x I_y \rangle_N & \langle I_x I_z \rangle_N \\ \langle I_y I_x \rangle_N & \langle I_y^2 \rangle_N & \langle I_y I_z \rangle_N \\ \langle I_z I_x \rangle_N & \langle I_z I_y \rangle_N & \langle I_z^2 \rangle_N \end{bmatrix} \qquad (8)$$

where $\langle \cdot \rangle_N$ represents the Gaussian-weighted smoothing over $N$ , . Eigendecomposition of $\bar{T}$ is then performed to define the shape (eigenvalues, $\lambda$) and the orientation (eigenvectors, $v_e$) of the diffusion ellipsoid. The fractional anisotropy ($FA$) is then computed from $\lambda$:

$$FA = \sqrt{\frac{(\lambda_1 - \lambda_2)^2 + (\lambda_2 - \lambda_3)^2 + (\lambda_3 - \lambda_1)^2}{2(\lambda_1^2 + \lambda_2^2 + \lambda_3^2)}} \qquad (9)$$
where $FA$ ranges from 0 (complete isotropic diffusion) to 1 (complete anisotropic diffusion) . The tertiary (least) eigenvalue-associated eigenvectors were then extracted for the 3-dimensional image volume, with the 4th dimension encoding the corresponding vector basis magnitudes. To visualize the orientation of fibers in the context of the image, the eigenvectors were intensity-modulated with both the fractional anisotropy and the original image voxel values, and represented as a 3D RGB stack for visualization in Imaris.

Multi-round multiplexed 3D image processing and analysis

As the images were acquired across multiple rounds on a confocal microscope, we encountered issues of misalignment and z-step glitching due to piezoelectric motor errors. Hence, the image tiles could neither be directly stitched nor registered across multiple rounds. Custom MATLAB code was written to manually remove all the z-step glitches, followed by matching the z-steps across multiple rounds, aided by the time-remapping function in Adobe After Effects, with linear interpolation for the transformed z-substacks. The resulting glitch-removed, z-matched tiles were then rigidly registered using the image registration application in MATLAB, followed by non-rigid registration for local matching. Finally, the registered tiles were stitched for downstream processing. Before segmentation, all non-vessel channels underwent background subtraction. They were then summed to capture the full morphology of the stained cells, followed by segmentation using Cellpose 2.0 . A custom model was trained on 2D excerpts of the images and used once adequate segmentation accuracy was achieved by manual inspection. The final test image segmentation had a Dice coefficient (F1-score) of 0.9354 ± 0.0596 and a Jaccard index of 0.8824 ± 0.1023 (mean ± S.D. over six excerpted test images). Vessels were segmented based on their staining intensity, and a distance transform was used to obtain the distance from vessels for all voxels. The cell masks subsequently facilitated the acquisition of the statistics for all stained channels. UMAP was performed in MATLAB R2023a using the UMAP 4.4 package , in a nested manner, incorporating the means and standard deviations of all immunostaining intensities, as well as the distance to the nearest blood vessel. An initial UMAP (with “min_dist” = 0.05, “metric” = “euclidean”, and “n_neighbors” = 15) was applied to each image stack tile, followed by DBSCAN clustering (using the default value ε = 0.6) to eliminate the largest cluster based on cell count. The remaining cells were subjected to a second UMAP (with the same parameters), where another round of DBSCAN clustering (with the same parameters) yielded the final cell clusters for analysis. The choice of UMAP parameters was based on an online guide ( https://umap-learn.readthedocs.io/en/latest/api.html ) and visual inspection of the clustering results. Violin plots of each clustered cell type's distance from neuropeptide Y-positive fibers were obtained by creating a distance transformation field from the segmented fibers; the segmented cell masks were used to compute the mean intensity value of the distance transformation field. The pairwise distances of the clustered cell types were obtained for the 30 nearest neighbors, followed by calculation of the mean and S.D. for the coefficient of variation. The gramm package in MATLAB R2023a was used for plotting some of the graphs .

Statistics and reproducibility

For Fig.
, Supplementary Figs. , , one-component exponential regression was applied for curve fitting, and Pearson’s correlation coefficient was computed for the scattered plot in Fig. . Two-sample unpaired t -test was employed for Supp. Fig. The staining and imaging experiments in Fig. – , were repeated with at least two independent samples in the same or similar condition with slight modifications, such as using similarly sized tissues of similar characteristics (especially for human samples), using different staining antibodies and marker choice, or staining durations. All the results were reliably reproduced in accordance with the expected outcome of the methods. No method was used to predetermine sample size. Reporting summary Further information on research design is available in the linked to this article.
For animal tissues, all experimental procedures were approved by the Animal Research Ethics Committee of the Chinese University of Hong Kong (CUHK) and were performed in accordance with the Guide for the Care and Use of Laboratory Animals (AEEC number 20-287-MIS). The housing of animals was provided by the Laboratory Animal Service Center of CUHK. For human tissues donated post-mortem, prior ethics approval was obtained from the Joint Chinese University of Hong Kong-New Territories East Cluster Clinical Research Ethics Committee (approval number 2022.137), with consent obtained from the donor and his family.
The antibodies utilized in this study were listed in Supplementary Table . All protein-conjugating fluorophores tested and their compatibility with INSIHGT were listed in Supplementary Table . Secondary Fab fragments or nanobodies were acquired from Jackson ImmunoResearch or Synaptic Systems, and all lectins were sourced from VectorLabs. Conjugation of secondary antibodies and lectins with fluorophores was achieved through N-hydroxysuccinimidyl (NHS) chemistry. The process was conducted at room temperature for a duration exceeding 16 h at antibody concentrations >3 mg/ml, using a tenfold molar excess of the reactive dye-NHS ester. Dodecahydro-closo-dodecaborate salts and other boron cluster compounds were procured from Katchem, while cyclodextrin derivatives were obtained from Cyclolab, Cyclodextrin Shop, or Sigma Aldrich. We noticed that the chemicals involved in the INSIHGT process occasionally require purification. Specifically, for Na 2 [B 12 H 12 ], if insoluble flakes were noticed after dissolution in PBS, the solution was then acidified to pH 1 with concentrated hydrochloric acid, extracted with diethyl ether (Sigma Aldrich), and the organic solvent was removed and distilled off with a warm water bath. The residual H 2 B 12 H 12 was then dissolved in a minimal amount of water, and neutralized with 1 M Na 2 CO 3 solution until pH 7 is reached with no further effervescence. The solution was then concentrated by distillation under vacuum and dried in an oven. For 2-hydroxypropyl-γ-cyclodextrin and sulfobutylether-β-cyclodextrin, if insoluble specks or dusts were noticed after dissolution in PBS, the solution was vacuum filtered through 0.22 μm hydrophilic cellulose membrane filters (GSWP14250) using a Buchner funnel before use. A slight brownish-yellow discoloration of the resulting solution would not interfere with the INSIHGT results. For benzyl benzoate, if the solution is yellowish (possibly due to the impurities of fluorenone present), the solvent is poured into a metal bowl or glass crystallization dish and refrigerated to 4 °C until crystallization begins. If no crystallization occurs, a small crystal seed of benzyl benzoate obtained by freezing the solvent at −20 °C in a microcentrifuge tube can be put into the cooled solvent to kick-start the process. The crystals were then collected by vacuum filtration with air continuously drawn at room temperature until the crystals are white, and then warmed to 37 °C to yield clear, colorless benzyl benzoate. If the resulting colorless benzyl benzoate is cloudy, 3 Å molecular sieves were added to the solvent to absorb the admixed water from condensation, before filtering off to result in a clear colorless benzyl benzoate. This purified benzyl benzoate is ready to constitute BABB clearing solution for imaging.
Adult male C57BL/6 mice were utilized. These mice were housed in a controlled environment (22–23 °C) with a 12-h light-dark cycle, provided by the Laboratory Animal Service Center of CUHK. Unrestricted access to a standard mouse diet and water was ensured, and the environment was maintained at <70% relative humidity. Tissues were perfusion formaldehyde-fixed and collected by post-mortem dissection. In the case of immunostaining for neurotransmitters where Immusmol antibodies were used, the tissues were perfusion-fixed with the STAINperfect™ immunostaining kit A (Immusmol) with the antibody staining steps replaced with those in our INSIHGT method. For human tissues, brain and kidney tissues donated post-mortem by a patient (aged 77 at the time of passing) were used in this study. Prior ethics approval was obtained from the Joint Chinese University of Hong Kong-New Territories East Cluster Clinical Research Ethics Committee (approval number 2022.137), with consent from the donor and his family. Human dissection was performed by an anatomist (HML) after perfusion fixation with 4% paraformaldehyde via the femoral artery. The post-mortem delay to fixation and tissue harvesting was 4 weeks at −18 °C refrigeration, and the fixation duration was 1 week at room temperature. The corresponding organs were then harvested and stored in 1x PBS at room temperature until use.
4% PFA-fixed, 1mm-thick mouse cerebellum slices, 0.5 μg anti-parvalbumin antibody (Invitrogen, PA1-933), and 0.5 μg AlexaFluor 647-labeled Fab fragments of Donkey anti-Rabbit antibody (Jackson Immunoresearch 711-607-003) were used in this experiment to develop our method. Co-incubation of the secondary Fab fragment and primary antibody was utilized for 1-step immunostaining. All stainings were performed with an overnight immunostaining first stage at room temperature (unless specified otherwise) in various buffers, with subsequent recovery secondary stage at room temperature (unless specified otherwise) in various buffers, as detailed for each strategy below. The tissues were then washed in 1x PBSN, dehydrated with graded methanol, and cleared in BABB, before proceeding to imaging with confocal microscopy. For the SDS/αCD system, immunostaining was performed in a solution consisting of 10 mM sodium dodecylsulphate (SDS) in 1xPBS, while recovery was performed with a solution consisting of 10 mM αCD in 1x PBS. For the GnCl/GroEL+GroES system, immunostaining was performed in solution consisting of 6 M guanidinium chloride in 1x PBS, while recovery was performed with GroEL+GroES refolding buffer, consisting of 0.5 μM GroEL (MCLabs GEL-100), 1 μM GroES (MCLabs GES-100), 2.5 mM adenosine triphosphate, 20 mM Tris base, 300 mM NaCl, 10 mM MgSO 4 , 10 mM KCl, 1 mM tris(2-carboxylethyl)phosphine hydrochloride, 10% glycerol, with pH adjusted to 7.9 . For the sodium deoxycholate (SDC)/βCD system, immunostaining was performed in a solution consisting of 15 mM sodium deoxycholate (SDC) with 240 mM Tris base, 360 mM CAPS ( N -cyclohexyl-3-aminopropanesulfonic acid), with pH adjusted to 8, while recovery was performed with a solution consisting of 15 mM βCD with 240 mM Tris base, 360 mM CAPS, with pH adjusted to 8. For the Na 2 [B 12 H 12 ]/γCD system, immunostaining was performed in a solution consisting of 0.1 M Na 2 [B 12 H 12 ] in 1x PBS, while recovery was performed in a solution consisting of 0.1 M γCD in 1x PBS.
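For readability, the four paired buffer systems screened above can be summarized as a small lookup table. This is simply a restatement of the compositions given in the text, with informal abbreviated labels of our own, shown as it might appear in a protocol-tracking script.

```python
# Restatement of the four staining -> recovery buffer pairs described above.
# Labels are informal abbreviations; compositions are abbreviated from the text.
BUFFER_SYSTEMS = {
    "SDS/alphaCD": {
        "staining": "10 mM SDS in 1x PBS",
        "recovery": "10 mM alpha-cyclodextrin in 1x PBS",
    },
    "GnCl/GroEL+GroES": {
        "staining": "6 M guanidinium chloride in 1x PBS",
        "recovery": "GroEL+GroES refolding buffer (composition in text)",
    },
    "SDC/betaCD": {
        "staining": "15 mM sodium deoxycholate, 240 mM Tris, 360 mM CAPS, pH 8",
        "recovery": "15 mM beta-cyclodextrin, 240 mM Tris, 360 mM CAPS, pH 8",
    },
    "Na2[B12H12]/gammaCD": {
        "staining": "0.1 M Na2[B12H12] in 1x PBS",
        "recovery": "0.1 M gamma-cyclodextrin in 1x PBS",
    },
}
```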
We designed a stringent benchmarking scheme for quantitative evaluation of antibody penetration depth and signal homogeneity across depth for comparison across existing deep immunostaining methods, based on our previously described principles (Supplementary Fig. ). The benchmarking experiment was carried out in two parts: in the first part, a whole mouse hemisphere was stained in bulk with anti-Parvalbumin (PV) antibodies with excess AlexaFluor 647-conjugated secondary Fab fragments (termed bulk-staining), after which the tissue was cut coronally at defined locations using a brain matrix and re-stained with anti-PV antibodies and AlexaFluor 488-conjugated secondary Fab fragments (termed cut-staining) (Supplementary Fig. ). Hence, signals from bulk-staining can be distinguished easily from cut-staining and reveal the different penetration depths of the two-staged immunostaining. We tested different deep immunostaining methods in the bulk-staining stage of the experiments, while the cut-staining was performed in 1× PBS with 0.1% Tween-20 as a conventional immunostaining buffer. The bulk-staining duration for INSIHGT was 24 h in benchmarking. All benchmarking samples were perfusion-fixed with 4% paraformaldehyde (PFA) in 1× PBS followed by post-fixation in 4% PFA overnight at 4 °C, except for SHIELD and mELAST samples where the SHIELD protocol was used. In addition, for the final RI matching, the benzyl alcohol/benzyl benzoate (BABB) clearing method was universally employed to standardize the changes in tissue volumes and hence the penetration distance adjustments. The standardized optical clearing avoids the variability in fluorescent quenching and tissue shrinkage/expansion introduced by different RI matching agents. For bulk-staining during our benchmarking experiment, we followed the published protocols except for eFLASH and mELAST due to the lack of specialized in-house equipment. For eFLASH, we stained the SHIELDed and SDS-delipidated tissue in the alkaline sodium deoxycholate buffer (240 mM Tris, 160 mM CAPS, 20% w/v D-sorbitol, 0.9% w/v sodium deoxycholate) and titrated-in acid-adjusting booster buffer (20% w/v D-sorbitol and 60 mM boric acid) hourly over 24 h to achieve a −0.1 ± 0.1 pH/h adjustment rate, using primary IgGs with secondary fluorophore-labeled Fab fragments. The tissue was then washed with 1× PBSTN (1× PBS, 1% v/v Triton X-100, and 0.02% w/v NaN 3 ) two times for 3 h each before imaging. For mELAST, we stained the SHIELDed and SDS-delipidated tissue with the antibody and Fab fragments in 0.2× PBSNaCh (0.2× PBS, 5% w/v NaCh and 0.02% w/v NaN 3 , 5% v/v normal donkey serum) first for 1 day at 37 °C, without embedding the SHIELDed tissue in an elastic gel or applying compression/stretching, followed by adding Triton X-100 to a final concentration of ~5% and incubating for 1 more day. The tissue was then washed with 1× PBSTN two times for 3 h each before imaging. For CUBIC HistoVision and iDISCO, the tissue was processed and stained as previously described. The staining durations were 14 days for CUBIC HistoVision and 7 days for iDISCO (both using primary IgGs with secondary fluorophore-labeled Fab fragments).
For SHANEL, the tissue was first delipidated with CHAPS/NMDEA solution (10% w/v CHAPS detergent and 25% w/v N-methyldiethanolamine in water) for 1 week, then further delipidated with dichloromethane/methanol as in iDISCO, then treated with 0.5 M acetic acid for 2 days, washed in water for 6 h repeated 2 times, and then treated with guanidinium solution (PBS with 4 M guanidinium chloride, 0.05 sodium acetate, 2% w/v Triton X-100) for 2 days, blocked in blocking buffer (1× PBS, 0.2% v/v Triton X-100, 10% v/v DMSO, 10% goat serum) for 1 day, and finally stained in antibody incubation buffer (1× PBS, 0.2% v/v Tween-20, 3% v/v DMSO, 3% v/v goat serum, 10 mg/L heparin sodium) using primary IgGs with secondary fluorophore-labeled Fab fragments for 7 days. For quantification, PV-positive cells were identified using a Laplacian of Gaussian filter, followed by intensity-based segmentation. These segmented masks allow the quantification of bulk- and cut-staining channel intensities, in addition to the distance transformation intensity, performed in MATLAB R2023a (MathWorks, US). For an ideal deep immunostaining, the bulk-immunostaining signals should be independent of the bulk-staining penetration distances computed with distance transform of the segmented tissue boundaries, and perfectly correlate with that of cut-immunostaining. This is often not the case, as "rimming" of bulk-staining signals inevitably occurs as a "shell" around the tissue due to more easily accessible antigens on the bulk-staining tissue surface. The rimming effect can be quantified by fitting a single-term exponential decay curve

$$\frac{\text{bulk-staining intensity}}{\text{cut-staining intensity}}=e^{-\tau\,(\text{bulk-staining penetration distance})} \tag{1}$$

and evaluating the decay constant, tau (τ), across penetration depths, with τ → 0+ as we approach the ideal case.
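To make this quantification concrete, a minimal curve-fitting sketch is given below. It assumes per-cell arrays of bulk-staining intensity, cut-staining intensity, and penetration distance (filled here with synthetic values); it illustrates Eq. (1) and is not the authors' MATLAB code.

```python
# Minimal sketch of fitting Eq. (1): not the authors' code. `bulk`, `cut`, and
# `depth_um` are per-cell arrays of bulk-staining intensity, cut-staining
# intensity, and bulk-staining penetration distance (synthetic values here).
import numpy as np
from scipy.optimize import curve_fit

def intensity_ratio_model(depth, tau):
    return np.exp(-tau * depth)          # Eq. (1)

def fit_rimming_tau(bulk, cut, depth_um):
    ratio = bulk / cut
    (tau,), _ = curve_fit(intensity_ratio_model, depth_um, ratio, p0=[0.01])
    return tau                           # tau -> 0+ indicates depth-independent staining

rng = np.random.default_rng(0)
depth = rng.uniform(0, 3000, 500)                              # micrometres
bulk = np.exp(-0.0005 * depth) + rng.normal(0, 0.02, 500)      # synthetic signal
cut = np.ones_like(depth)
print(fit_rimming_tau(bulk, cut, depth))                       # ~5e-4
```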
We first pre-screened the WCS by immunostaining for parvalbumin in 1 mm 3 mouse cortex tissue cubes in the presence of WCS at 0.1 M. After 1 day of incubation at room temperature, the staining solution was aspirated, and 0.1 M of the corresponding cyclodextrin was added and incubated overnight. The tissue was then washed in PBSN for 15 min two times, cleared with the BABB method, and imaged. This procedure eliminated [B 12 Br 12 ] 2− , [B 12 I 12 ] 2− , and [PW 12 O 40 ] 3− (as cesium or sodium salts), as they either did not give the correct immunostaining pattern or led to tissue destruction. We tested [Fe(C 5 H 5 ) 2 ] + (as the hexafluorophosphate salt) for the sake of completeness as a low-charge large-sized cation. To benchmark the ability to achieve deep and homogeneous immunostaining, the above benchmarking procedure was used. Mouse hemibrains were fixed, washed, and stained with 1 μg of rabbit anti-parvalbumin antibody and 1 μg of AlexaFluor 647-labeled donkey anti-rabbit secondary antibody Fab fragments in 0.1 M of the WCS. The staining proceeded for 1 day, after which the solution was replaced with 0.1 M of the corresponding cyclodextrin (or its derivatives) and incubated overnight. The hemibrains were then washed in PBSN for 1 h two times, cut in the middle coronally, and re-stained for parvalbumin using AlexaFluor 488-labeled secondary Fab fragments. The tissue was then washed, cleared with the BABB method, and imaged on the cut face using a confocal microscope.
A detailed step-by-step protocol used in this study is given below. As a general overview, tissues were typically fixed using formalin or 4% paraformaldehyde, thoroughly washed in PBSN, and pre-incubated overnight at 37 °C in INSIHGT buffer A. The tissues were then stained with a solution containing the desired antibodies, Fab fragments, lectins, and SBEβCD-complexed nucleic acid probes in INSIHGT buffer A, ensuring a final [B 12 H 12 ] 2− concentration of 0.25 M. Staining duration varied from 6 h to 10 days based on tissue size, antigen, and required homogeneity (please see the calculation of time t in the step-by-step protocol). Post-staining, the solution was aspirated and replaced with INSIHGT buffer B (0.25 M 2-hydroxypropyl-γ-cyclodextrin in PBS) without prior washing, followed by a minimum 6-h incubation with adequate shaking of the viscous buffer. After sufficient PBSN washing, tissues were ready for imaging or clearing. Over-incubation at any step, for up to 60 days, was tolerable. After imaging, the antibodies can be eluted with 0.1 M sodium sulphite in INSIHGT buffer A at 37 °C overnight.
To test antibodies in a high-throughput manner, we compiled a list of antibodies, reviewed their tissue expression and staining patterns in the literature, and then obtained the respective tissues known to have positive staining. These tissue blocks or entire organs were then washed, dehydrated, delipidated, rehydrated, washed, and infiltrated with INSIHGT solution A as described in the INSIHGT protocol. These INSIHGT-infiltrated tissues were then cut into ~1 mm 3 tissue cubes and placed in a 96-well plate as indicated in the list, with each well containing 70 μl of 1x INSIHGT solution A. About 0.5 μg of the primary antibody to be tested was then added, together with 0.5 μg of the corresponding AlexaFluor 647- or AlexaFluor 594-conjugated secondary antibody Fab fragment. The AlexaFluor 647 and 594 fluorophores were chosen to minimize interference from any tissue autofluorescence on the interpretation of results. To account for the total volume of antibodies added to each well, an equal volume of 2x INSIHGT solution A was then added to ensure a final concentration of 1x INSIHGT solution A. The plate was then sealed and the staining was allowed to proceed in the dark overnight at room temperature. The tissues were then washed in INSIHGT solution B for 2 h and in PBSN for 1 h two times, and then dehydrated through 15-min incubations in 50% methanol, 100% methanol, and again 100% methanol. The tissues were then cleared in BABB for 15 min before proceeding to imaging. The total fixed tissue-to-image time for the antibody compatibility test is <36 h.
Mouse and human samples were pre-processed as described above. Tissues were divided into the post-INSIHGT treated group, which underwent the INSIHGT protocol with 3 days of INSIHGT A incubation (without the application of antibodies) and 6 h of INSIHGT B incubation, plus BABB clearing, and the control group, which was immersed in PBSN for an equivalent period of time. Both groups were immersed in 70% ethanol, preceded by immersion in 100% ethanol for the post-INSIHGT group (which were in BABB), and in 50% ethanol for the control group (which were in PBSN). Tissues were then immersed in 100% ethanol, xylene, and paraffin as in the standard paraffin embedding process. The embedded tissues were cut into 5 μm (human) or 10 μm (mouse) sections, followed by 2D histological staining with special stains. Following standard protocols, H&E staining was performed on human brain and kidney, PAS staining was performed on human kidney, Alcian blue staining was performed on mouse colon, and Masson trichrome staining was performed on mouse kidney samples.
Confocal microscopy was performed using a Leica SP8 confocal microscope equipped with excitation lasers at 405 nm, 488 nm, 514 nm, 561 nm, 649 nm, with detection using a 10× (NA 0.4, Leica HC PL APO ×10/0.40 CS2) or a 40× oil-immersion (NA 1.30, Leica HC PL APO 40×/1.30 Oil CS2) objective and a tunable emission filter. A custom-built MesoSPIM v5.1 was used for light-sheet microscopy equipped with lasers at 405 nm, 488 nm, 514 nm, 561 nm, 633 nm, and 675 nm, with detection using an Olympus MVX-ZB10 zoom body with a magnification range from 0.63×–6.3×. The equipped emission filters were from AHF, including QuadLine Rejectionband ZET405/488/561/640, 440/50 ET Bandpass, 509/22 Brightline HC, 515/LP Brightline HC Longpass Filter, 542/27 BrightLine HC, 585/40 ET Bandpass, 594 LP Edge Basic Longpass Filter, 660/13 BrightLine HC, 633 LP Edge Basic Longpass Filter, and a 685/LP BrightLine HC Longpass Filter. Two-photon tomography was performed at 780 nm excitation using a 16× objective (NA 0.8, Nikon CFI75 LWD 16X W), equipped with four emission filters (ThorLabs 460-60, Semrock 525/50, Semrock 607/70, and Chroma ET 670/50). Basic image acquisition parameters for all microscopy images in this study were listed in Supplementary Table .
Control and INSIHGT-treated samples following the 1 mm 3 treatment timeline were re-embedded in paraffin wax and sent for nucleic acid integrity, sequencing, and bioinformatics analysis services provided by the BGI Hongkong Tech Solution NGS Lab. RNA integrity number analysis was performed using the Qubit Fluorometer. Whole genome DNA quality analysis was performed using the Agilent 2100 Bioanalyzer system. Sequencing was performed using the DNBSEQ™ sequencing technology platform. For transcriptomic comparison, the total clean bases were 11.2 Gb and 10.97 Gb for the control and INSIHGT-treated samples, respectively. The clean reads ratio after filtering was 90.64% and 89.96%, respectively. For whole genome sequencing, the total clean bases were 114.5 Gb and 125.2 Gb for the control and INSIHGT-treated samples, respectively, with both samples having a clean data rate of 100% and a mapping rate of 99.96%.
Our RNA FISH HCR protocol is largely adapted from Choi et al. . The post-INSIHGT samples were first fixed in 4% PFA for 1 day. The samples were then pre-incubated in pre-hybridization buffer until the tissue sank to the bottom, and hybridized in hybridization buffer at 37 °C overnight. The next day, the tissue was washed in probe wash buffer for 1 h two times at room temperature, pre-incubated in amplification buffer for 30 min, followed by HCR amplification by incubating in amplification buffer with the addition of 30 pmol of fluorescently-labeled HCR hairpins and incubated overnight at RT. Note that the HCR hairpins were snap-cooled (heated at 95 °C for 2 min and cooled to RT for 30 min) in 10 μL 5× SSC buffer before application to ensure hairpin structures are formed . The samples were then washed thoroughly in 500 μL probe wash buffer for 30 min × 3 times to mitigate non-specific binding and later subjected to confocal imaging. The HCR probes which hybridize on the mRNA targets were custom-designed following the approach by Choi et al. , as shown in Table , and were purchased from Integrated DNA Technologies.
No penetration-related attenuation intensity adjustments were performed for all displayed images except for the 3D renderings (but not 2D cross-sectional views) in Fig. and Supplementary Movie to provide the best visualization of an internal signal. For samples imaged with two-photon tomography, we noticed a thin rim attributed to the heat produced during the gelatin embedding process (which we verified by repeating the staining and confirming its absence with light sheet microscopy). We employed an intensity transformation mask based on the exponent of the distance from the whole organ mask surface. Image segmentation was performed with Cellpose 2.0 for cells implemented in MATLAB R2023b or Python, or with simple intensity thresholding. Affine and non-linear image registration was performed in MATLAB R2023a or manually in Adobe After Effects 2020 using the mesh warp effect and time remapping for z -plane adjustment. Image stitching was performed either with ImageJ BigStitcher plugin or assisted manually with Adobe After Effects 2020 followed by tile allocation using custom-written scripts in MATLAB R2023a. 3D image visualization and Movie rendering were performed with Bitplane Imaris 9.1, which were done as raw data with brightness and contrast adjustments, except for the whole mouse brain imaged with two-photon tomography. To remove their slicing artifacts, we resliced the volume into x-z slices, performed a z-direction Gaussian blur, followed by a 2D Fourier transform and filtered out non-central frequency peaks before inverting the transform. Finally, a Richardson-Lucy deconvolution was performed with a point-spread function elongated in the x-z direction, and resliced back into x-y slices.
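The de-striping and deconvolution steps just described can be sketched as follows. This is an illustrative reimplementation under stated assumptions (volume axis order (z, y, x), a simple central-band Fourier mask, and a Gaussian point-spread function); the original processing was done in MATLAB/Imaris with parameters not reproduced here.

```python
# Illustrative de-striping / deconvolution sketch; assumes `vol` is a 3-D NumPy
# array ordered (z, y, x). The Fourier mask, sigma values, and Gaussian PSF are
# placeholder assumptions, not the original parameters.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from skimage.restoration import richardson_lucy

def remove_slicing_artifacts(vol, sigma_z=2.0, keep_halfwidth=3, rl_iters=20):
    vol = vol.astype(float)
    out = np.empty_like(vol)
    for y in range(vol.shape[1]):                               # work on x-z slices
        sl = gaussian_filter1d(vol[:, y, :], sigma_z, axis=0)   # blur along z
        f = np.fft.fftshift(np.fft.fft2(sl))
        mask = np.zeros(f.shape)
        cz = f.shape[0] // 2
        mask[cz - keep_halfwidth:cz + keep_halfwidth + 1, :] = 1  # keep central band
        out[:, y, :] = np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
    # Richardson-Lucy deconvolution with a PSF elongated in x and z.
    zz, yy, xx = np.mgrid[-6:7, -6:7, -6:7]
    psf = np.exp(-(zz**2 / 18.0 + yy**2 / 2.0 + xx**2 / 18.0))
    psf /= psf.sum()
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)    # normalise for RL
    return richardson_lucy(out, psf, rl_iters)
```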
Podocyte-to-PEC microfilaments of 14 mouse kidneys were manually traced via the SNT plugin in ImageJ. Path properties of the tracings were then exported for further analysis using custom codes in MATLAB R2023a. Distance transforms were performed under manually curated glomerulus and Bowman space masks, such that each voxel value corresponds to the distance between that voxel and the nearest nonzero voxel of the Bowman space mask. Path displacement $d_{fil}$ was computed via Pythagoras theorem using the start and end coordinates of the filament. Minimal distance $d_{\min}$ is defined as the voxel value difference between the start and end coordinates. Path length $d_{path}$ is directly measured via SNT. Tortuosity is defined as $d_{path}/d_{fil}$, skewness is defined as $d_{fil}/d_{\min}$, and the angle of take-off is defined as the angle between the unit gradient vector of the distance transform and the unit path displacement vector. The geodesic distance $d_{A}(p,q)$ between voxels $p,q \in A$ is defined as the minimal length $L$ of path(s) $P=(p_{1},p_{2},\ldots,p_{l})$ connecting $p,q$, where $A$ is the set of all voxels constituting the surface of the glomerular mask:

$$d_{A}(p,q)=\min\{L(P):p_{1}=p,\ p_{l}=q,\ P\subseteq A\} \tag{2}$$

Correlation statistics were then performed via GraphPad Prism version 8 for Windows (GraphPad Software, Boston, Massachusetts, USA, www.graphpad.com ). Tracing and statistical analysis for the human cerebellar neurofilament inclusions were performed analogously.
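A minimal sketch of these per-filament metrics is shown below. It assumes the traced node coordinates are available as an (n, 3) array and that the Bowman-space distance transform has been precomputed; the geodesic distance is approximated with scikit-image's minimum-cost-path search restricted to the glomerular surface mask, which is one reasonable way to realize Eq. (2), not the authors' implementation.

```python
# Minimal sketch of the per-filament metrics and geodesic distance (Eq. 2).
# Assumes `path_xyz` is an (n, 3) array of traced node coordinates and
# `bowman_dist` is the distance transform of the Bowman-space mask.
import numpy as np
from skimage.graph import MCP_Geometric

def filament_metrics(path_xyz, bowman_dist):
    d_path = np.sum(np.linalg.norm(np.diff(path_xyz, axis=0), axis=1))  # path length
    d_fil = np.linalg.norm(path_xyz[-1] - path_xyz[0])                  # displacement
    start = tuple(np.round(path_xyz[0]).astype(int))
    end = tuple(np.round(path_xyz[-1]).astype(int))
    d_min = abs(bowman_dist[end] - bowman_dist[start])                  # minimal distance
    return {"tortuosity": d_path / d_fil,
            "skewness": d_fil / d_min if d_min > 0 else np.inf}

def geodesic_distance(surface_mask, p, q):
    # Minimum-cost path constrained to the glomerular surface voxels (set A).
    costs = np.where(surface_mask, 1.0, np.inf)
    cum_costs, _ = MCP_Geometric(costs).find_costs([tuple(p)])
    return cum_costs[tuple(q)]
```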
To visualize cerebellar neural and glial fibers in their preferred orientations, we performed structure tensor analysis with orientation-based color-coding in 3D. In detail, let $G:\mathbb{R}^{3}\times\mathbb{R}_{+}\to\mathbb{R}$ be a 3D Gaussian kernel with standard deviation $\sigma$:

$$G(x,y,z,\sigma)=\frac{1}{(2\pi\sigma^{2})^{3/2}}\exp\left(-\frac{x^{2}+y^{2}+z^{2}}{2\sigma^{2}}\right) \tag{3}$$

Define a 3D image as a function $I:\mathbb{R}^{3}\to\mathbb{R}$ which outputs the spatial voxel values. The gradient $\nabla I:\mathbb{R}^{3}\to\mathbb{R}^{3}$ of $I$ at each voxel is obtained by convolving $I$ with the spatial derivatives of $G$:

$$\nabla G(x,y,z,\sigma)=\left(\frac{\partial G}{\partial x},\frac{\partial G}{\partial y},\frac{\partial G}{\partial z}\right) \tag{4}$$

$$\nabla I=I*\nabla G \tag{5}$$

where $*$ denotes the convolution operation. Compute the structure tensor $\mathbf{T}:\mathbb{R}^{3}\to\mathbb{R}^{3\times 3}$ as the outer product of $\nabla I$ with itself:

$$\mathbf{T}(x,y,z)=\nabla I\otimes\nabla I \tag{6}$$

$\mathbf{T}$ is then smoothed over a neighborhood $N$ via convolution with $G$ to give $\bar{\mathbf{T}}$:

$$\bar{\mathbf{T}}(x,y,z)=G*\mathbf{T}(x,y,z) \tag{7}$$

$$\bar{\mathbf{T}}(x,y,z)=\begin{bmatrix}\langle I_{x}^{2}\rangle_{N} & \langle I_{x}I_{y}\rangle_{N} & \langle I_{x}I_{z}\rangle_{N}\\ \langle I_{y}I_{x}\rangle_{N} & \langle I_{y}^{2}\rangle_{N} & \langle I_{y}I_{z}\rangle_{N}\\ \langle I_{z}I_{x}\rangle_{N} & \langle I_{z}I_{y}\rangle_{N} & \langle I_{z}^{2}\rangle_{N}\end{bmatrix} \tag{8}$$

where $\langle\cdot\rangle_{N}$ represents the Gaussian-weighted smoothing over $N$. Eigendecomposition of $\bar{\mathbf{T}}$ is then performed to define the shape (eigenvalues, $\lambda$) and the orientation (eigenvectors, $\mathbf{v}_{e}$) of the diffusion ellipsoid. The fractional anisotropy ($FA$) is then computed from $\lambda$:

$$FA=\sqrt{\frac{(\lambda_{1}-\lambda_{2})^{2}+(\lambda_{2}-\lambda_{3})^{2}+(\lambda_{3}-\lambda_{1})^{2}}{2(\lambda_{1}^{2}+\lambda_{2}^{2}+\lambda_{3}^{2})}} \tag{9}$$

where $FA$ ranges from 0 (complete isotropic diffusion) to 1 (complete anisotropic diffusion). The tertiary (least) eigenvalue-associated eigenvectors were then extracted for the 3-dimensional image volume, with the 4th dimension encoding the corresponding vector basis magnitudes. To visualize the orientation of fibers in the context of the image, the eigenvectors were intensity-modulated with both the fractional anisotropy and the original image voxel values, and represented as a 3D RGB stack for visualization in Imaris.
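For reference, Eqs. (3) to (9) can be computed voxel-wise with standard array operations. The sketch below is not the authors' implementation, and the smoothing scales are placeholder values.

```python
# Voxel-wise structure tensor, FA, and orientation colour-coding (Eqs. 3-9).
# A minimal sketch, not the authors' implementation; sigmas are placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_fa(img, sigma_grad=1.0, sigma_smooth=2.0):
    img = img.astype(float)
    # Gradients by convolution with Gaussian derivatives (Eqs. 4-5).
    grads = [gaussian_filter(img, sigma_grad,
                             order=tuple(int(i == ax) for i in range(3)))
             for ax in range(3)]
    # Smoothed outer products give the structure tensor entries (Eqs. 6-8).
    T = np.empty(img.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            T[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma_smooth)
    evals, evecs = np.linalg.eigh(T)                 # ascending eigenvalues per voxel
    l1, l2, l3 = evals[..., 2], evals[..., 1], evals[..., 0]
    fa = np.sqrt(((l1 - l2)**2 + (l2 - l3)**2 + (l3 - l1)**2) /
                 (2 * (l1**2 + l2**2 + l3**2) + 1e-12))          # Eq. 9
    minor_evec = evecs[..., :, 0]                    # least-eigenvalue eigenvector
    rgb = np.abs(minor_evec) * fa[..., None] * (img / img.max())[..., None]
    return fa, rgb                                   # rgb: orientation-coded stack
```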
As the images were acquired across multiple rounds on a confocal microscope, we encountered the issues of misalignment and z-step glitching due to piezoelectric motor errors. Hence, the tiles of images could neither be directly stitched nor registered across multiple rounds. A custom MATLAB code was written to manually remove all the z-step glitching, followed by matching the z-steps across multiple rounds, aided by the time-remapping function in Adobe After Effects, with linear interpolation for the transformed z-substacks. The resulting glitch-removed, z-matched tiles were then rigidly registered using the image registration application in MATLAB, followed by non-rigid registration for local matching. Finally, the registered tiles were stitched for downstream processing. Before segmentation, all non-vessel channels underwent background subtraction. They were then summed to capture the full morphology of stained cells, followed by segmentation using Cellpose 2.0. A custom model was trained and used based on 2D excerpts of the images until adequate segmentation accuracy was achieved by manual inspection. The final test image segmentation has a Dice Coefficient (or F1-score) of 0.9354 ± 0.0596 and Jaccard Index of 0.8824 ± 0.1023, provided as mean ± S.D. on six excerpted test images. Vessels were segmented based on their staining intensity, and a distance transform was used to obtain the distance from vessels for all voxels. The cell masks subsequently facilitated the acquisition of the statistics for all stained channels. UMAP was performed in MATLAB R2023a using the UMAP 4.4 package in a nested manner, incorporating the means and standard deviations of all immunostaining intensities, as well as the distance to the nearest blood vessel. An initial UMAP (with “min_dist” = 0.05, “metric” = “euclidean”, and “n_neighbors” = 15) was applied to each image stack tile, followed by DBSCAN clustering (using the default value ε = 0.6) to eliminate the largest cluster based on cell count. The remaining cells were subjected to a second UMAP (with the same parameters), where another round of DBSCAN clustering (with the same parameters) yielded the final cell clusters for analysis. The choice of UMAP parameters was based on an online guide ( https://umap-learn.readthedocs.io/en/latest/api.html ) and visual inspection of the clustering results. Violin plots for each clustered cell type’s distance from neuropeptide Y-positive fibers were obtained by creating a distance transformation field from the segmented fibers. Segmented cell masks were used to compute the mean intensity value of the distance transformation field. The pairwise distances of the clustered cell types were obtained for the 30 nearest neighbors, followed by calculating the mean and SD for the coefficient of variation. The gramm package in MATLAB R2023a was used for plotting some of the graphs.
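A compact sketch of the nested embedding and clustering logic is given below. It is a Python rewording of the MATLAB procedure described above (umap-learn and scikit-learn stand in for the MATLAB UMAP 4.4 package), and it omits the per-tile looping and feature z-scoring for brevity.

```python
# Python sketch of the nested UMAP/DBSCAN clustering; `features` is an
# (n_cells, n_features) array of per-cell staining means/SDs plus distance
# to the nearest vessel. Not the authors' MATLAB pipeline.
import numpy as np
from sklearn.cluster import DBSCAN
import umap  # umap-learn

def nested_umap_dbscan(features, eps=0.6):
    def embed_and_cluster(X):
        emb = umap.UMAP(min_dist=0.05, metric="euclidean",
                        n_neighbors=15).fit_transform(X)
        return emb, DBSCAN(eps=eps).fit_predict(emb)

    _, labels1 = embed_and_cluster(features)
    largest = np.bincount(labels1[labels1 >= 0]).argmax()   # biggest cluster by count
    keep = labels1 != largest                               # drop it, keep the rest
    _, labels2 = embed_and_cluster(features[keep])
    return keep, labels2                                    # final cell clusters
```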
For Fig. , Supplementary Figs. , , one-component exponential regression was applied for curve fitting, and Pearson’s correlation coefficient was computed for the scattered plot in Fig. . Two-sample unpaired t -test was employed for Supp. Fig. The staining and imaging experiments in Fig. – , were repeated with at least two independent samples in the same or similar condition with slight modifications, such as using similarly sized tissues of similar characteristics (especially for human samples), using different staining antibodies and marker choice, or staining durations. All the results were reliably reproduced in accordance with the expected outcome of the methods. No method was used to predetermine sample size.
Further information on research design is available in the Reporting Summary linked to this article.
Supplementary Information Description of Additional Supplementary Files Supplementary Movie 1 Supplementary Movie 2 Supplementary Movie 3 Reporting Summary Transparent Peer Review file
Source data
|
A modified non-contrast MRI in the preoperative examination of periacetabular osteotomy at 3.0T | 1bd307d3-3823-4735-9e3f-a02e0aca4384 | 11783826 | Surgical Procedures, Operative[mh] | Developmental dysplasia of the hip (DDH) is a condition where the hip joint does not properly form in early childhood and is the main cause of hip replacement in young people (about 21–29%). In DDH, the insufficient local coverage and the abnormal stress conduction of the hip joint alter the hip biomechanics and overload the articular cartilage, causing cartilage degeneration and labrum injury, eventually leading to secondary osteoarthritis. While the early stage of DDH progresses slowly, it develops much faster once osteoarthritis occurs, and total hip replacement (total hip arthroplasty, THA) may be required at the late stage of osteoarthritis. Because of the limited service time of artificial hip prostheses, young patients may have to undergo a revision surgery within their lifetime, which not only makes the patients suffer from the surgical procedure, but also places a huge burden on medical systems and caregivers. Periacetabular osteotomy (PAO) has been widely used as an effective treatment for young DDH patients (age 40 and under) when the damage to the joint surface articular cartilage has not yet become advanced. PAO offers both symptomatic relief and preservation of the natural hip joint, resulting in improved hip stability, femoral head coverage, and joint biomechanics. PAO can significantly reduce (or eliminate) hip symptoms, slow the development of osteoarthritis, and consequently avoid or delay joint replacement surgery as well as revision surgery. In DDH patients with symptoms such as hip pain, the rate of acetabular labrum injury reaches 90%. Previous studies suggested that the causes of labral injury include the abnormal acetabular bone structure, leading to increased acetabular edge shear force and labral load. Labral injury affects the stability of the hip joint and is one of the causes of hip pain in DDH patients. It is necessary to explore and repair the labrum through a hip joint incision when PAO is used to correct skeletal deformities. MRI (magnetic resonance imaging) hip arthrography has been shown to be a reliable method for perioperative diagnosis of labral lesions. To the best of our knowledge, direct MRI arthrography (d-MRA) is a more accurate arthrography technique for the diagnosis of the hip acetabular labrum. Previous studies compared non-contrast hip MRI and d-MRA at 1.5T and 3.0T using arthroscopy and concluded that hip d-MRA was better in terms of diagnostic performance than non-contrast MRI. While a recent study demonstrated that d-MRA performed better in diagnosing cartilage injury, it has been shown that conventional non-contrast MRI and d-MRA were comparable in the diagnosis of labral injury at 3.0T. To the best of our knowledge, reports of preoperative imaging in PAO are rare, and no preoperative imaging comparison has been performed between non-contrast MRI and d-MRA. In our clinical practice, non-invasive techniques are always preferred, as d-MRA is invasive and the contrast agent (Gd-DTPA) may potentially cause complications such as post-injection pain, bleeding, infection, as well as gadolinium-related toxicity. In this study, we aimed to investigate whether non-contrast MRI could replace d-MRA (and how the two differ) in adult DDH patients undergoing PAO preoperative examination at 3.0T.
Patients From December 2014 to December 2019, a retrospective study included 35 patients (38 hips) with DDH who underwent periacetabular osteotomy (PAO) at the Joint Surgery Department of our hospitals. Each patient underwent both the d-MRA as well as the modified non-contrast MRI examination of the hip joint at 3.0T. Inclusion criteria DDH patients with lateral CE (center edge) angle < 20° and acetabular roof tilt angle > 10°. With only early osteoarthritis (Tönnis grade 0–2). Aged from 9 years (with the Y-shape epiphyseal closure) to 50 years old; Hartofilakidis type I DDH patients, and Hartofilakidis type II DDH patients with repositionable hip joint confirmed by radiography. Exclusion criteria Hartofilakidis type II DDH patients who show non-positionable hip joint confirmed by radiography, and Hartofilakidis type III DDH patients (high dislocation, the femoral head is outside the acetabulum). Patients with severe osteoarthritis (Tönnis grade greater than 2). History of hip infection. Flat hip deformity (a sequela of Perthes disease). According to the above criteria, a total of 35 patients with an age range of 9–41 years (mean age 25 years old), 4 males with 4 hips and 31 females with 34 hips, were included, among which 22 were left hips and 16 were right hips. The age and gender were determined solely based on the inclusion and exclusion criteria. MRI and procedures All MRI examinations were performed on a Siemens MAGNETOM Trio 3.0T scanner (Siemens Healthcare, Germany). Modified non-contrast MRI and d-MRA Each patient first underwent the non-contrast MRI examination of the hip joint; the patient was in a supine position with the lower limbs straight and feet close together. During imaging, earphones were provided to protect hearing. Imaging parameters were described in Table . The d-MRA was performed after the non-contrast MRI. The patient was in a supine position on the fluoroscopic bed, the affected knee joint and foot were slightly flexed, and the puncture point was therefore marked. The skin around the puncture area was disinfected and anesthetized, and a long puncture needle was inserted at a vertical angle. When the puncture needle reached the proximal bone surface of the femoral neck, the core of the puncture needle was pulled out, and then the syringe was connected to the puncture needle. The contrast agent Gd-DTPA (Gadopentetate meglumine) was injected into the joint cavity, and T1W MRI scans (as in Table ) of the hip joint were performed about 30 min after injection. Arthroscopy was not included in this study because it would have increased the total time for patients. Diagnostic methods The acetabular labrum is divided into 12 o’clock positions (see Fig. ): the posterior border of the transverse acetabular ligament is 6 o’clock, the upper corresponding point is 12 o’clock, the anterior midpoint is 9 o’clock, and the posterior midpoint is the 3 o’clock point. According to the examination position, the oblique sagittal, oblique coronal, and oblique axial orientations were obtained. Among them, the oblique sagittal orientation was mainly used to observe the front and back of the labrum, corresponding to 8–11 o’clock and 3–5 o’clock; the oblique coronal orientation was used to observe the outer and upper part of the labrum, which was indicated by the 11 − 3 o’clock direction; the oblique axial radiograph was mainly utilized to assist the two orientations above, such as to observe the bone of the acetabulum and the femoral head.
The MRI diagnosis of acetabular labral injury was mainly based on the criteria proposed by Mintz et al. in 2005: normal labrum imaging shows a uniform triangular low signal; labral degeneration typically shows higher signal in the labrum, involving the articular surface or joint capsule. The diagnosis of d-MRA was mainly based on the criteria proposed by Czerny: normal labrum imaging shows a uniform triangular signal at the acetabular rim; grade I injury hyperintensity does not involve the articular surface or joint capsule; grade II injury hyperintensity involves the articular surface; and the signal of grade III injury shows the separation of the labrum from the acetabular rim. In addition, MRI can reveal masses around the hip, such as soft tissue tumors, acetabular cysts, and paralabral cysts, which are characterized by hypointensity on T1-weighted images and hyperintensity on T2-weighted images. Data analysis A radiologist in orthopedic imaging first checked whether there was labral injury and labral varus based on the imaging findings. Then the results were reviewed by a senior physician in orthopedic imaging examination and diagnosis. If disagreement happened, a chief physician would be involved in the discussion to reach a conclusion. Restricting the evaluation to professional physicians would help minimize inconsistency for this specific disease. The accuracy of the non-contrast MRI was confirmed by the d-MRA and the subsequent surgical examinations, and was calculated from true positives, true negatives, false positives, and false negatives. From these results, positive predictive value, negative predictive value, overall sensitivity, specificity, and accuracy were calculated. For the detection of labral injury, the consistency of the results of the non-contrast MRI and the d-MRA was analyzed using Kappa statistics, and the IBM SPSS 26.0 software was used to perform the statistical analysis on the data using the following standards: When the K value is between 0.8 and 1, the diagnostic consistency is excellent. When the K value is between 0.6 and 0.8, the diagnostic consistency is good. When the K value is between 0.4 and 0.6, the diagnostic consistency is moderate. When the K value is between 0.2 and 0.4, the diagnostic consistency is relatively poor. When the K value is between 0 and 0.2, the diagnostic consistency is poor.
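For reference, the agreement and accuracy statistics described above reduce to a few lines of arithmetic. The sketch below uses hypothetical counts for illustration; the study's actual values are those reported in the Results and the accompanying table.

```python
# Sketch of the accuracy and agreement statistics; counts are hypothetical.
import numpy as np

def diagnostic_metrics(tp, fp, fn, tn):
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "accuracy": (tp + tn) / (tp + fp + fn + tn)}

def cohens_kappa(table):
    """Agreement table: rows = non-contrast MRI rating, columns = d-MRA rating."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_observed = np.trace(table) / n
    p_chance = (table.sum(axis=0) * table.sum(axis=1)).sum() / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

print(diagnostic_metrics(tp=90, fp=10, fn=5, tn=95))   # illustrative numbers only
print(cohens_kappa([[90, 10], [5, 95]]))
```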
A total of 35 patients (38 hips) were included in this study, with ages from 9 to 41 years (mean age 25 years), 4 males (4 hips) and 31 females (34 hips). Of these, 22 hips were from the left side and 16 hips from the right side. All the hip joints were in an early stage of osteoarthritis (Tönnis grade 0 or 1). Labral injury was found in 34 hips (incidence rate 89% (34/38)), all of which were observed by the d-MRA. The non-contrast MRI was able to find labral injury in 33 hips, all of which were in the anterior upper quadrant of the hip joint (see Fig. , showing a patient with obvious hip labrum injury). There were 10 cases of labral varus among the 38 hips (incidence rate: 26% (10/38)); example images are shown in Fig. , in which clear hip labral varus can be seen. In addition, 9 of the 38 hips were found with acetabular cysts (incidence rate: 23% (9/38)), and one of these hips also suffered from a labral cyst and hip joint injury (see Fig. , showing a clear acetabular cyst combined with acetabular cartilage injury). Finally, two hips were found with a labral cyst only, so the total incidence rate of labral cysts was 7% (3/38) (see Fig. , showing an obvious labral cyst in this patient). The positive predictive value, negative predictive value, overall sensitivity, specificity, and accuracy of the non-contrast MRI for labral injury are shown in Table ; the non-contrast MRI exhibits good consistency with the d-MRA in the evaluation of the labrum, with Kappa statistic K = 0.623 (0.60 < K ≤ 0.80).
With the advancement of orthopaedics and osteotomy around the hip, PAO surgery has become mature and more reproducible. The Bernese-style periacetabular osteotomy is now the most popular posterior column-preserving hip acetabular reduction osteotomy. In China, it is estimated that 16.05 million adults suffer from DDH, which leads to significant medical costs to patient families and caregivers. Patients with DDH often have severe acetabular bony deformities, peripheral soft tissue deformities, and anatomical deformities of the femur. Labrum injury and varus are common pathological changes in hip dysplasia. Some researchers think that the treatment of labral injury and labral varus is a very important part of the treatment of DDH, because the increasingly hypertrophic labrum and labral varus are very important pathological factors in DDH. Over the past decades, both the d-MRA and non-contrast MRI of the hip have been widely studied. Some studies reported that hip MRI was a powerful tool to detect lesions in the labrum of the hip joint, while others showed only limited capability. Contrast-enhanced MRI can effectively detect labral injury and cartilage injury of the hip joint, but because the hip joint puncture technique is difficult to perform, the technique has been greatly limited in clinical practice. In addition, it has been reported that Gd-DTPA may introduce infection and toxicity. Our results show that the non-contrast MRI of the hip at 3.0T has high accuracy in the assessment of acetabular labral injury, with a sensitivity of 94%, which is comparable to that of the d-MRA, with good consistency in the evaluation of the labrum (K = 0.623). We observed that most of the acetabular pathology scores assessed at surgery were normal in the anterior-superior, anterior-inferior, and posterior-inferior quadrants. Some patients with hip dysplasia had both acetabular cysts and labral cysts. Compared with previous studies using conventional MRI, the increased accuracy in our study may be attributed to the specific imaging method employed in this study. In addition, the high-field-strength 3.0T imaging enabled a higher signal-to-noise ratio, higher spatial resolution, and/or reduced scan time. The results of this study suggest that non-contrast MRI at 3.0T showed high accuracy in assessing acetabular labral injury and labral varus in DDH patients. Compared with hip arthrography MRI (d-MRA), the non-contrast MRI is more convenient for the preoperative examination of hip-preserving surgery in DDH patients, which has clinical significance for the preoperative examination of patients with hip dysplasia. There are a few limitations of this study. First, the sample size was small; only a total of 35 cases (38 hip joints) were included in this study. Second, all patients in this study had undergone hip-preserving surgery before developing severe osteoarthritis; therefore, the degree of hip joint disease was relatively mild, which may lead to bias in the patient composition. Our future work will be performed with an increased sample size, a more balanced gender distribution, and less experienced examiners to reduce potential bias. Finally, although hip arthroscopy provides valuable validation, it was excluded from the study because it would have further increased the total examination time for patients.
Nevertheless, we found that the modified non-contrast MRI is comparable to d-MRA at 3.0T in the preoperative examination of hip-preserving surgery patients with DDH; in DDH patients, labral injury mostly occurs in the anterior upper quadrant; and acetabular and labral cysts may be present in DDH patients.
|
Reconciling functional differences in populations of neurons recorded with two-photon imaging and electrophysiology | aafa9c96-6fb3-4720-b1db-817513e65383 | 8285106 | Physiology[mh] | Systems neuroscience aims to explain how complex adaptive behaviors can arise from the interactions of many individual neurons. As a result, population recordings—which capture the activity of multiple neurons simultaneously—have become the foundational method for progress in this domain. Extracellular electrophysiology and calcium-dependent two-photon optical physiology are by far the most prevalent population recording techniques, due to their single-neuron resolution, ease of use, and scalability. Recent advances have made it possible to record simultaneously from thousands of neurons with electrophysiology or tens of thousands of neurons with calcium imaging . While insights gained from both methods have been invaluable to the field, it is clear that neither technique provides a completely faithful picture of the underlying neural activity. In this study, our goal is to better understand the inherent biases of each recording modality, and specifically how to appropriately compare results obtained with one method to those obtained with the other. Head-to-head comparisons of electrophysiology and imaging data are rare in the literature, but are critically important as the practical aspects of each method affect their suitability for different experimental questions. Since the expression of calcium indicators can be restricted to genetically defined cell types, imaging can easily target recordings to specific sub-populations . Similarly, the use of retro- or anterograde viral transfections to drive indicator expression allows imaging to target sub-populations defined by their projection patterns . The ability to identify genetically or projection-defined cell populations in electrophysiology experiments is far more limited . Both techniques have been adapted for chronic recordings, but imaging offers the ability to reliably return to the same neurons over many days without the need to implant bulky hardware . Furthermore, because imaging captures structural, in addition to functional, data, individual neurons can be precisely registered to tissue volumes from electron microscopy , in vitro brain slices , and potentially other ex vivo techniques such as in situ RNA profiling . In contrast, the sources of extracellular spike waveforms are very difficult to localize with sufficient precision to enable direct cross-modal registration. Inherent differences in the spatial sampling properties of electrophysiology and imaging are widely recognized, and influence what information can be gained from each method . Multi-photon imaging typically yields data in a single plane tangential to the cortical surface, and is limited to depths of <1 mm due to a combination of light scattering and absorption in tissue. While multi-plane and deep structure imaging are both areas of active research, imaging of most subcortical structures requires physical destruction of more superficial tissues . Extracellular electrophysiology, on the other hand, utilizes microelectrodes embedded in the tissue, and thus dense recordings are easiest to perform along a straight line, normal to the cortical surface, in order to minimize per-channel tissue displacement. Linear probes provide simultaneous access to neurons in both cortex and subcortical structures, but make it difficult to sample many neurons from the same cortical layer. 
The temporal resolutions of these two methodologies also differ in critical ways . Imaging is limited by the dwell time required to capture enough photons to distinguish physiological changes in fluorescence from noise , and the kinetics of calcium-dependent indicators additionally constrain the ability to temporally localize neural activity . While kilohertz-scale imaging has been achieved , most studies are based on data sampled at frame rates between 1 and 30 Hz. In contrast, extracellular electrophysiology requires sampling rates of 20 kHz or higher, in order to capture the action potential waveform shape that is essential for accurate spike sorting. High sampling rates allow extracellular electrophysiology to pin-point neural activity in time with sub-millisecond resolution, enabling analyses of fine-timescale synchronization across simultaneously recorded neural populations. The fact that electrophysiology can measure action potentials—what we believe to be the fundamental currency of neuronal communication and causation—bestows upon it a more basic ontological status than on calcium imaging, which captures an indirect measure of a neuron’s spike train. To date, there has been no comprehensive attempt to characterize how the choice of recording modality affects the inferred functional properties of neurons in sensory cortex. Our limited understanding of how scientific conclusions may be skewed by the recording modality represents the weakest link in the chain of information integration across the techniques available to neurophysiologists today. To address this, we took advantage of two recently collected large-scale datasets that sampled neural activity in mouse visual cortex using either two-photon calcium imaging or dense extracellular electrophysiology . These datasets were collected using standardized pipelines, such that the surgical methods, experimental steps, and physical geometry of the recording rigs were matched as closely as possible . The overall similarity of these Allen Brain Observatory pipelines eliminates many of the potential confounding factors that arise when comparing results from imaging and electrophysiology experiments. We note that this is not an attempt at calibration against ground truth data, but rather an attempt to reconcile results across two uniquely comprehensive datasets collected under highly standardized conditions. Our comparison focused on metrics that capture three fundamental features of neural responses to environmental stimuli: (1) responsiveness, (2) preference (i.e. the stimulus condition that maximizes the peak response), and (3) selectivity (i.e. sharpness of tuning). Responsiveness metrics characterize whether or not a particular stimulus type (e.g. drifting gratings) reproducibly elicits increased activity. For responsive neurons, preference metrics (e.g. preferred temporal frequency) determine which stimulus condition (out of a finite set) elicits the largest response, and serve as an indicator of a neuron’s functional specialization—for example, whether it responds preferentially to slow- or fast-moving stimuli. Lastly, selectivity metrics (e.g. orientation selectivity, lifetime sparseness) characterize a neuron’s ability to distinguish between particular exemplars within a stimulus class. All three of these features must be measured accurately in order to understand how stimuli are represented by individual neurons. We find that preference metrics are largely invariant across modalities. 
However, in this dataset, electrophysiology suggests that neurons show a higher degree of responsiveness, while imaging suggests that responsive neurons show a higher degree of selectivity. In the absence of steps taken to mitigate these differences, the two modalities will yield mutually incompatible conclusions about basic neural response properties. These differences could be reduced by lowering the amplitude threshold for valid ΔF/F events, applying a spikes-to-calcium forward model to the electrophysiology data , or sub-selection of neurons based either on event rate or by contamination level (the likelihood that signal from other neurons is misattributed to the neurons under consideration). This reconciliation reveals the respective biases of these two recording modalities, namely that extracellular electrophysiology predominantly captures the activity of highly active units while missing or merging low-firing-rate units, while calcium-indicator binding dynamics sparsify neural responses and supralinearly amplify spike bursts.
We compared the visual responses measured in the Allen Brain Observatory Visual Coding (‘imaging’) and Allen Brain Observatory Neuropixels (‘ephys’) datasets, publicly available through brain-map.org and the AllenSDK Python package. These datasets consist of recordings from neurons in six cortical visual areas (as well as subcortical areas in the Neuropixels dataset) in the awake, head-fixed mouse in response to a battery of passively viewed visual stimuli. For both datasets, the same drifting gratings, static gratings, natural scenes, and natural movie stimuli were shown . These stimuli were presented in a single 3 hr recording session for the ephys dataset. For the imaging dataset, these stimuli were divided across three separate 1 hr imaging sessions from the same group of neurons. In both ephys and imaging experiments, mice were free to run on a rotating disc, the motion of which was continuously recorded. The imaging dataset was collected using genetically encoded GCaMP6f under the control of specific Cre driver lines. These Cre drivers limit the calcium indicator expression to specific neuronal populations, including different excitatory and inhibitory populations found in specific cortical layers (see for details). The ephys dataset also made use of transgenic mice in addition to wild-type mice. These transgenic mice expressed either channelrhodopsin in specific inhibitory populations for identification using optotagging (see for details), or GCaMP6f in specific excitatory or inhibitory populations (see Materials and methods). Unlike in the imaging dataset, however, these transgenic tools did not determine which neurons could be recorded. We limited our comparative analysis to putative excitatory neurons from five cortical visual areas (V1, LM, AL, PM, and AM). In the case of the imaging data, we only included data from 10 excitatory Cre lines, while for ephys we limited our analysis to regular-spiking units by setting a threshold on the waveform duration (>0.4 ms). After this filtering step, we were left with 41,578 neurons from 170 mice in imaging, and 11,030 neurons from 52 mice in ephys. The total number of cells for each genotype, layer, and area is shown in .

Calculating response magnitudes for both modalities

In order to directly compare results from ephys and imaging, we first calculated the magnitude of each neuron’s response to individual trials, which were defined as the interval over which a stimulus was present on the screen. We computed a variety of metrics based on these response magnitudes, and compared the overall distributions of those metrics for all the neurons in each visual area. The methods for measuring these responses necessarily differ between modalities, as explained below. For the ephys dataset, stimulus-evoked responses were computed using the spike times identified by Kilosort2 . Kilosort2 uses information in the extracellularly recorded voltage traces to find templates that fit the spike waveform shapes of all the units in the dataset, and assigns a template to each spike. The process of ‘spike sorting’—regardless of the underlying algorithm—does not perfectly recover the true underlying spike times, and has the potential to miss spikes (false negatives) or assign spikes (or noise waveforms) to the wrong unit (false positives). The magnitude of the response for a given trial was determined by counting the total number of spikes (including false positives and excluding false negatives) that occurred during the stimulation interval.
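As a concrete illustration, the sketch below computes per-trial response magnitudes for both modalities: spike counts within the stimulation interval for ephys, and (anticipating the event-amplitude summation described in the following paragraph) summed calcium-event amplitudes for imaging. The function and variable names are ours rather than AllenSDK API calls, and all times are assumed to be in seconds on a common clock.

import numpy as np

def ephys_trial_magnitudes(spike_times, trial_starts, trial_ends):
    """Ephys: count the spikes assigned to a unit that fall inside each
    stimulation interval, exactly as they appear in the sorted spike train."""
    spike_times = np.asarray(spike_times)
    return np.array([
        np.count_nonzero((spike_times >= t0) & (spike_times < t1))
        for t0, t1 in zip(trial_starts, trial_ends)
    ])

def imaging_trial_magnitudes(event_times, event_amplitudes, trial_starts, trial_ends):
    """Imaging: sum the amplitudes of extracted calcium events inside each
    stimulation interval, so a single burst-driven event can dominate a trial."""
    event_times = np.asarray(event_times)
    event_amplitudes = np.asarray(event_amplitudes)
    return np.array([
        event_amplitudes[(event_times >= t0) & (event_times < t1)].sum()
        for t0, t1 in zip(trial_starts, trial_ends)
    ])

Because event amplitudes can span orders of magnitude while spike counts cannot, the same underlying activity can yield quite different trial-to-trial response profiles under the two computations.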
This spike-rate–based analysis is the de facto standard for analyzing electrophysiology data, but it washes out information about bursting or other within-trial dynamics. For example, a trial that includes a four-spike burst will have the same apparent magnitude as a trial with four isolated spikes . Methods for determining response magnitudes for neurons in imaging datasets are less standardized, and deserve careful consideration. The most commonly used approach involves averaging the continuous, baseline-normalized fluorescence signal over the trial interval. This method relies on information that is closer to the raw data. However, it suffers the severe drawback that, due to the long decay time of calcium indicators, activity from one trial can contaminate the fluorescence trace during the next trial, especially when relatively short (<1 s) inter-stimulus intervals are used. To surmount this problem, one can attempt to determine the onset of abrupt changes in fluorescence and analyze these extracted ‘events,’ rather than the continuous trace. There are a variety of algorithms available for this purpose, including non-negative deconvolution , approaches that model calcium binding kinetics , and methods based on machine learning . For our initial comparison, we extracted events using the same method we applied to our previous analysis of the large-scale imaging dataset . This algorithm finds event times by reframing ℓ 0 -regularized deconvolution as a change point detection problem that has a mathematically guaranteed, globally optimal ‘exact’ solution (hereafter, ‘exact ℓ 0 ’; ; ). The algorithm includes a sparsity constraint (λ) that is calibrated to each neuron’s overall noise level. For the most part, the events that are detected from the 2P imaging data do not represent individual spikes, but rather are heavily biased towards indicating short bouts of high firing rate, for example bursting . There is, however, rich information contained in the amplitudes of these events, which have a non-linear—albeit on average monotonic—relationship with the underlying number of true spikes within a window. Therefore, in our population imaging dataset, we calculated the trial response magnitude by summing the amplitudes of events that occurred during the stimulation interval . In example trials for the same hypothetical neuron recorded with both modalities , the response magnitudes are equivalent from the perspective of electrophysiology. However, from the perspective of imaging, the trial that includes a spike burst (which results in a large influx of calcium) may have an order-of-magnitude larger response than a trial that only includes isolated spikes. Baseline metric comparison A comparison between individual neurons highlights the effect of differences in response magnitude calculation on visual physiology. A spike raster from a neuron in V1 recorded with electrophysiology appears much denser than the corresponding event raster for a separate neuron that was imaged in the same area . For each neuron, we computed responsiveness, preference, and selectivity metrics. We consider both neurons to be responsive to the drifting gratings stimulus class because they have a significant response (p < 0.05, compared to a distribution of activity taken during the epoch of spontaneous activity) on at least 25% of the trials of the preferred condition (the grating direction and temporal frequency that elicited the largest mean response) . 
Since these neurons were deemed responsive according to this criterion, their function was further characterized in terms of their preferred stimulus condition and their selectivity (a measure of tuning curve sharpness). We use lifetime sparseness as our primary selectivity metric, because it is a general metric that is applicable to every stimulus type. It reflects the distribution of responses of a neuron across some stimulus space (e.g. natural scenes or drifting gratings), equaling 0 if the neuron responds equivalently to all stimulus conditions, and one if the neuron only responds to a single condition. Across all areas and mouse lines, lifetime sparseness is highly correlated with more traditional selectivity metrics, such as drifting gratings orientation selectivity ( R = 0.8 for ephys, 0.79 for imaging; Pearson correlation), static gratings orientation selectivity ( R = 0.79 for ephys, 0.69 for imaging), and natural scenes image selectivity ( R = 0.85 for ephys, 0.95 for imaging). For our initial analysis, we sought to compare the results from ephys and imaging as they are typically analyzed in the literature , prior to any attempt at reconciliation. We will refer to these comparisons as ‘baseline comparisons’ in order to distinguish them from subsequent comparisons made after applying one or more transformations to the imaging and/or ephys datasets. We pooled responsiveness, preference, and selectivity metrics for all of the neurons in a given visual area across experiments, and quantified the disparity between the imaging and ephys distributions using Jensen–Shannon distance. This is the square root of the Jensen–Shannon divergence, which is a method of measuring the disparity between two probability distributions that is symmetric and always has a finite value . Jensen–Shannon distance is equal to 0 for perfectly overlapping distributions, and one for completely non-overlapping distributions, and falls in between these values for partially overlapping distributions. Across all areas and stimuli, the fraction of responsive neurons was higher in the ephys dataset than the imaging dataset . To quantify the difference between modalities, we computed the Jensen–Shannon distance for the distributions of response reliabilities, rather than the fraction of responsive neurons at the 25% threshold level. This is done to ensure that our results are not too sensitive to the specific responsiveness threshold we have chosen. We found tuning preferences to be consistent between the two modalities, including preferred temporal frequency , preferred direction , preferred orientation , and preferred spatial frequency . This was based on the qualitative similarity of their overall distributions, as well as their low values of Jensen–Shannon distance. Selectivity metrics, such as lifetime sparseness , orientation selectivity , and direction selectivity , were consistently higher in imaging than ephys. Controlling for laminar sampling bias and running behavior To control for potential high-level variations across the imaging and ephys experimental preparations, we first examined the effect of laminar sampling bias. For example, the ephys dataset contained more neurons in layer 5, due to the presence of large, highly active cells in this layer. The imaging dataset, on the other hand, had more neurons in layer 4 due to the preponderance of layer 4 Cre lines included in the dataset . 
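For reference, here is a minimal sketch of the summary statistics used throughout this comparison: lifetime sparseness (one standard formulation, in the style of Vinje and Gallant), a per-trial reliability check against spontaneous activity, and the Jensen–Shannon distance between two metric distributions. The binning choices and the form of the significance test are illustrative assumptions, not the exact implementation used for these datasets.

import numpy as np
from scipy.spatial.distance import jensenshannon

def lifetime_sparseness(mean_responses):
    """Lifetime sparseness: 0 if the neuron responds equally to all conditions,
    1 if it responds to only a single condition."""
    r = np.asarray(mean_responses, dtype=float)
    n = r.size
    if not np.any(r):
        return 0.0
    return (1.0 - (r.sum() / n) ** 2 / (np.square(r).sum() / n)) / (1.0 - 1.0 / n)

def fraction_significant_trials(trial_responses, spontaneous_responses, alpha=0.05):
    """Response reliability: fraction of preferred-condition trials whose response
    exceeds the (1 - alpha) quantile of responses drawn from the spontaneous epoch
    (a stand-in for the per-trial significance test described in the text)."""
    cutoff = np.quantile(spontaneous_responses, 1.0 - alpha)
    return np.mean(np.asarray(trial_responses) > cutoff)

def js_distance(values_a, values_b, bins=50):
    """Jensen-Shannon distance between two metric distributions
    (0 = identical, 1 = completely non-overlapping)."""
    lo = min(np.min(values_a), np.min(values_b))
    hi = max(np.max(values_a), np.max(values_b))
    p, edges = np.histogram(values_a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(values_b, bins=edges)
    return jensenshannon(p, q, base=2)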
After resampling each dataset to match layer distributions ( , see Materials and methods for details), we saw very little change in the overall distributions of responsiveness, preference, and selectivity metrics , indicating that laminar sampling biases are likely not a key cause of the differences we observed between the modalities. We next sought to quantify the influence of behavioral differences on our comparison. As running and other motor behavior can influence visually evoked responses , could modality-specific behavioral differences contribute to the discrepancies in the response metrics? In our datasets, mice tend to spend a larger fraction of time running in the ephys experiments, perhaps because of the longer experiment duration, which may be further confounded by genotype-specific differences in running behavior . Within each modality, running had a similar impact on visual response metrics. On average, units in ephys and neurons in imaging have slightly lower responsiveness during periods of running versus non-running , but slightly higher selectivity . To control for the effect of running, we sub-sampled our imaging experiments in order to match the overall distribution of running fraction to the ephys data . This transformation had a negligible impact on responsiveness, selectivity, and preference metrics . From this analysis we conclude that, at least for the datasets examined here, behavioral differences do not account for the differences in functional properties inferred from imaging and ephys. Impact of event detection on functional metrics We sought to determine whether our approach to extracting events from the 2P data could explain between-modality differences in responsiveness and selectivity. Prior work has shown that scientific conclusions can depend on both the method of event extraction and the chosen parameters . However, the impact of different algorithms on functional metrics has yet to be assessed in a systematic way. To address this shortcoming, we first compared two event detection algorithms, exact ℓ 0 and unpenalized non-negative deconvolution (NND), another event extraction method that performs well on a ground truth dataset of simultaneous two-photon imaging and loose patch recordings from primary visual cortex . Care was taken to ensure that the characteristics of the ground truth imaging data matched those of our large-scale population recordings in terms of their imaging resolution, frame rate, and noise levels, which implicitly accounted for differences in laser power across experiments. The correlation between the ground truth firing rate and the overall event amplitude within a given time bin is a common way of assessing event extraction performance . Both algorithms performed equally well in terms of their ability to predict the instantaneous firing rate (for 100 ms bins, exact ℓ 0 r = 0.48 ± 0.23; NND r = 0.50 ± 0.24; p = 0.1, Wilcoxon signed-rank test). However, this metric does not capture all of the relevant features of the event time series. In particular, it ignores the rate of false positive events that are detected in the absence of a true underlying spike . We found that the exact ℓ 0 method, which includes a built-in sparsity constraint, had a low rate of false positives (8 ± 2% in 100 ms bins, N = 32 ground truth recordings), whereas NND had a much higher rate (21 ± 4%; p = 8e-7, Wilcoxon signed-rank test). 
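The two ground-truth evaluation criteria described above can be sketched as follows: a bin-wise correlation between extracted event amplitudes and true spike counts, and a false positive rate defined here, as one reasonable proxy, as the fraction of event-containing bins that contain no true spike. Both assume the simultaneous imaging and loose patch recording has already been binned at 100 ms.

import numpy as np

def binwise_correlation(event_amps_per_bin, spikes_per_bin):
    """Pearson correlation between summed event amplitude and ground truth
    spike count in matched (e.g. 100 ms) bins."""
    return np.corrcoef(event_amps_per_bin, spikes_per_bin)[0, 1]

def false_positive_rate(event_amps_per_bin, spikes_per_bin):
    """Fraction of bins that contain detected events but no true spike."""
    has_event = np.asarray(event_amps_per_bin) > 0
    if not has_event.any():
        return 0.0
    return np.mean(np.asarray(spikes_per_bin)[has_event] == 0)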
Small-amplitude false positive events have very little impact on the overall correlation between the ground truth spike rate and the extracted events, so parameter optimization does not typically penalize such events. However, we reasoned that the summation of many false positive events could have a noticeable impact on response magnitudes averaged over trials. Because these events always have a positive sign, they cannot be canceled out by low-amplitude negative deflections of similar magnitude, as would occur when analyzing ΔF/F directly. We tested the impact of applying a minimum-amplitude threshold to the event time series obtained via NND. If the cumulative event amplitude within a given time bin (100 ms) did not exceed a threshold (set to a multiple of each neuron’s estimated noise level), the events in that window were removed. As expected, this procedure resulted in almost no change in the correlation between event amplitude and the ground truth firing rate . However, it had a noticeable impact on both the average response magnitude within a given time window , as well as the false positive rate . Applying the same thresholding procedure to an example neuron from our population imaging dataset demonstrates how low-amplitude events can impact a cell’s apparent selectivity level. The prevalence of such events differs between the two compared approaches to event extraction, the exact ℓ 0 method used in and NND with no regularization . When summing event amplitudes over many drifting gratings presentations, the difference in background rate has a big impact on the measured value of global orientation selectivity (gOSI), starting from 0.91 when using the exact ℓ 0 and dropping to 0.45 when using NND. However, these differences could be reconciled simply by setting a threshold to filter out low-amplitude events. Extracting events from a population dataset ( N = 3095 Slc17a7+ neurons from V1) using NND resulted in much lower measured overall selectivity levels, even lower than for electrophysiology . Thresholding out events at multiples of each neuron’s noise level (σ) raised selectivity levels; a threshold between 3 and 4σ brought the selectivity distribution closest to the ephys distribution, while a threshold between 4 and 5σ resulted in selectivity that roughly matched that obtained with exact ℓ 0 . The rate of low-amplitude events also affected responsiveness metrics . Responsiveness was highest when all detected events were included, matching or slightly exceeding the levels measured with ephys. Again, applying an amplitude threshold between 4 and 5σ brought responsiveness to the level originally measured with exact ℓ 0 . By imposing the same noise-based threshold on the minimum event size that we originally computed to determine optimal regularization strength (essentially performing a post-hoc regularization), we are able to reconcile the results obtained with NND with those obtained via exact ℓ 0 . This analysis demonstrates that altering the 2P event extraction methodology represents one possible avenue for reconciling results from imaging and ephys. However, as different parameters were needed to reconcile either selectivity or responsiveness, and the optimal parameters further depend on the presented stimulus class, this cannot be the whole story. Furthermore, relying only on 2P event extraction parameters to reconcile results across modalities implies that the ephys data is itself unbiased, and all we need to do is adjust our imaging analysis pipeline until our metrics match. 
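The noise-based thresholding described above can be expressed compactly: within each short window, the summed NND event amplitude must exceed a multiple of the neuron's estimated noise level, or the events in that window are discarded. The frame rate and window size below are assumptions chosen to approximate the 100 ms bins used in the text.

import numpy as np

def threshold_low_amplitude_events(events, noise_sigma, n_sigma=4.0,
                                   frame_rate=30.0, window_s=0.1):
    """Zero out windows of an NND event trace whose cumulative amplitude stays
    below n_sigma times the neuron's noise level (post-hoc regularization)."""
    events = np.asarray(events, dtype=float).copy()
    win = max(1, int(round(window_s * frame_rate)))
    for start in range(0, events.size, win):
        stop = start + win
        if events[start:stop].sum() < n_sigma * noise_sigma:
            events[start:stop] = 0.0
    return events

Sweeping n_sigma between roughly 3 and 5 reproduces the trade-off described above: lower thresholds inflate responsiveness and depress selectivity, while higher thresholds approach the exact ℓ0 result.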
Because we know this is not the case, we explored the potential impacts of additional factors on the discrepancies between our ephys and imaging datasets. Controlling for transgene expression Given that imaging (but not ephys) approaches fundamentally require the expression of exogenous proteins (e.g. Cre, tTA, and GCaMP6f in the case of our transgenic mice) in specific populations of neurons, we sought to determine whether such foreign transgenes, expressed at relatively high levels, could alter the underlying physiology of the neural population. All three proteins have been shown to have neurotoxic effects under certain conditions , and calcium indicators, which by design bind intracellular calcium, can additionally interfere with cellular signaling pathways. To examine whether the expression of these genes could explain the differences in functional properties inferred from imaging and ephys experiments, we performed electrophysiology in mice that expressed GCaMP6f under the control of specific Cre drivers. We collected data from mice with GCaMP6f expressed in dense excitatory lines (Cux2 and Slc17a7) or in sparse inhibitory lines (Vip and Sst), and compared the results to those obtained from wild-type mice . On average, we recorded 45.9 ± 7.5 neurons per area in 17 wild-type mice, and 55.8 ± 15.6 neurons per area in 19 GCaMP6f transgenic mice . The distribution of firing rates of recorded neurons in mice from all Cre lines was similar to the distribution for units in wild-type mice . Because some GCaMP mouse lines have been known to exhibit aberrant seizure-like activity , we wanted to check whether spike bursts were more prevalent in these mice. We detected bursting activity using the LogISI method, which identifies bursts in a spike train based on an adaptive inter-spike interval threshold . The dense excitatory Cre lines showed a slight increase in burst fraction (the fraction of all spikes that participate in bursts) compared to wild-type mice . This minor increase in burstiness, however, was not associated with changes in responsiveness or selectivity metrics that could account for the baseline differences between the ephys and imaging datasets. The fraction of responsive neurons was not lower in the GCaMP6f mice, as it was for the imaging dataset—in fact, in some visual areas there was an increase in responsiveness in the GCaMP6f mice compared to wild-type . In addition, the distribution of selectivities was largely unchanged between wild-type and GCaMP6f mice . Thus, while there may be subtle differences in the underlying physiology of GCaMP6f mice, particularly in the dense excitatory lines, those differences cannot explain the large discrepancies in visual response metrics derived from ephys or imaging. Forward-modeling synthetic imaging data from experimental ephys data Given the substantial differences between the properties of extracellularly recorded spikes and events extracted from fluorescence traces , and the potential impact of event extraction parameters on derived functional metrics , we hypothesized that transforming spike trains into simulated calcium events could reconcile some of the baseline differences in response metrics we have observed. The inverse transformation—converting fluorescence events into synthetic spike times—is highly under-specified, due to the reduced temporal resolution of calcium imaging . To implement the spikes-to-calcium transformation, we used MLSpike, a biophysically inspired forward model . 
MLSpike explicitly considers the cooperative binding between GCaMP and calcium to generate synthetic ΔF/F fluorescence traces using the spike trains for each unit recorded with ephys as input. We extracted events from these traces using the same exact ℓ 0 -regularized detection algorithm applied to our experimental imaging data, and used these events as inputs to our functional metrics calculations . A subset of the free parameters in the MLSpike model (e.g. ΔF/F rise time, Hill parameter, saturation parameter, and normalized resting calcium concentration) were fit to simultaneously acquired loose patch and two-photon-imaging recordings from layer 2/3 of mouse visual cortex . Additionally, three parameters were calibrated on the fluorescence traces from the imaging dataset to capture the neuron-to-neuron variance of these parameters: the average amplitude of a fluorescence transient in response to a spike burst ( A ), the decay time of the fluorescence transients ( τ ), and the level of Gaussian noise in the signal ( σ ) . For our initial characterization, we selected parameter values based on the mode of the overall distribution from the imaging dataset. The primary consequence of the forward model was to ‘sparsify’ each neuron’s response by washing out single spikes while non-linearly boosting the amplitude of ‘bursty’ spike sequences with short inter-spike intervals. When responses were calculated on the ephys spike train, a trial containing a 4-spike burst within a 250 ms window would have the same magnitude as a trial with four isolated spikes across the 2 s trial. After the forward model, however, the burst would be transformed into an event with a magnitude many times greater than the events associated with isolated spikes, due to the nonlinear relationship between spike counts and the resulting calcium-dependent fluorescence. This effect can be seen in stimulus-locked raster plots for the same neuron before and after applying the forward model . What effects does this transformation have on neurons’ inferred functional properties? Applying the forward model plus event extraction to the ephys data did not systematically alter the fraction of responsive units in the dataset. While 8% of neurons switched from being responsive to drifting gratings to unresponsive, or vice versa, they did so in approximately equal numbers . The forward model did not improve the match between the distributions of response reliabilities (our responsiveness metric) for any stimulus type . The forward model similarly had a negligible impact on preference metrics; for example, only 14% of neurons changed their preferred temporal frequency after applying the forward model , and the overall distribution of preferred temporal frequencies still matched that from the imaging experiments . In contrast, nearly all neurons increased their selectivity after applying the forward model . Overall, the distribution of lifetime sparseness to drifting gratings became more similar to—but still did not completely match—the imaging distribution across all areas . The average Jensen–Shannon distance between the ephys and imaging distributions was 0.41 before applying the forward model, compared to 0.14 afterward (mean bootstrapped distance between the sub-samples of the imaging distribution = 0.064; p < 0.001 for all areas, since 1000 bootstrap samples never exceeded the true Jensen–Shannon distance; see Materials and methods for details). 
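To make the shape of this transformation concrete, here is a toy spikes-to-calcium forward model, a simplified stand-in for MLSpike rather than its actual implementation. Each spike adds calcium that decays with time constant tau; a Hill-type nonlinearity with exponent greater than one supralinearly amplifies closely spaced spikes before saturating; Gaussian noise of standard deviation sigma is added. All numerical values here are arbitrary illustrations, not the calibrated parameters.

import numpy as np

def toy_forward_model(spike_times, duration_s, frame_rate=30.0, amplitude=0.2,
                      tau=0.4, noise_sigma=0.03, hill_n=2.0, k_half=4.0, seed=0):
    """Convert a spike train into a synthetic dF/F trace (toy model, not MLSpike).
    Calcium accumulates per spike and decays exponentially; a Hill nonlinearity
    boosts bursts supralinearly; Gaussian noise approximates imaging noise."""
    rng = np.random.default_rng(seed)
    n_frames = int(round(duration_s * frame_rate))
    spikes_per_frame, _ = np.histogram(spike_times, bins=n_frames,
                                       range=(0.0, duration_s))
    decay = np.exp(-1.0 / (tau * frame_rate))
    calcium = np.zeros(n_frames)
    c = 0.0
    for i in range(n_frames):
        c = c * decay + spikes_per_frame[i]
        calcium[i] = c
    dff = amplitude * calcium**hill_n / (k_half**hill_n + calcium**hill_n)
    return dff + rng.normal(0.0, noise_sigma, n_frames)

The synthetic trace can then be passed through the same event extraction used for the experimental imaging data, so that both modalities are analyzed with an identical downstream pipeline.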
These results imply that the primary effects of the forward model—providing a supralinear boost to the ‘amplitude’ of spike bursts, and thresholding out single spike events—can account for baseline differences in selectivity, but not responsiveness, between ephys and imaging. To assess whether the discrepancies between the imaging and ephys distributions of responsiveness and selectivity metrics could be further reduced by using a different set of forward model parameters, we brute-force sampled 1000 different parameter combinations for one ephys session, using 10 values each for amplitude, decay time, and noise level , spanning the entire range of parameters calibrated on the experimental imaging data. The fraction of responsive neurons did not change as a function of forward model parameters, except for the lowest values of amplitude and noise level, where it decreased substantially . This parameter combination (A ≤ 0.0015, sigma ≤ 0.03) was observed in less than 1% of actual neurons recorded with two-photon imaging, so it cannot account for differences in responsiveness between the two modalities. Both the difference between the median lifetime sparseness for imaging and ephys, as well as the Jensen–Shannon distance between the full ephys and imaging lifetime sparseness distributions, were near the global minimum for the parameter values we initially used . It is conceivable that the inability of the forward model to fully reconcile differences in responsiveness and selectivity was due to the fact that we applied the same parameters across all neurons of the ephys dataset, without considering their genetically defined cell type. To test for cell-type-specific differences in forward model parameters, we examined the distributions of amplitude, decay time, and noise level for individual excitatory Cre lines used in the imaging dataset. The distributions of parameter values across genotypes were largely overlapping, with the exception of increasing noise levels for some of the deeper populations (e.g. Tlx3-Cre in layer 5, and Ntsr1-Cre_GN220 in layer 6) and an abundance of low-amplitude neurons in the Fezf2-CreER population . Given that higher noise levels and lower amplitudes did not improve the correspondence between the ephys and imaging metric distributions, we concluded that selecting parameter values for individual neurons based on their most likely cell type would not change our results. Furthermore, we saw no correlation between responsiveness or selectivity metrics in imaged neurons and their calibrated amplitude, decay time, or noise level . Effect of ephys selection bias We next sought to determine whether electrophysiology’s well-known selection bias in favor of more active neurons could account for the differences between modalities. Whereas calcium imaging can detect the presence of all neurons in the field of view that express a fluorescent indicator, ephys cannot detect neurons unless they fire action potentials. This bias is exacerbated by the spike sorting process, which requires a sufficient number of spikes in order to generate an accurate template of each neuron’s waveform. Spike sorting algorithms can also mistakenly merge spikes from nearby neurons into a single ‘unit’ or allow background activity to contaminate a spike train, especially when spike waveforms generated by one neuron vary over time, for example due to the adaptation that occurs during a burst. These issues all result in an apparent activity level increase in ephys recordings. 
In addition, assuming a 50-μm ‘listening radius’ for the probes (radius of half-cylinder around the probe where the neurons’ spike amplitude is sufficiently above noise to trigger detection) , the average yield of 116 regular-spiking units/probe (prior to QC filtering) would imply a density of 42,000 neurons/mm 3 , much lower than the known density of ~90,000 neurons/mm 3 for excitatory cells in mouse visual cortex . If the ephys dataset is biased toward recording neurons with higher firing rates, it may be more appropriate to compare it with only the most active neurons in the imaging dataset. To test this, we systematically increased the event rate threshold for the imaged neurons, so the remaining neurons used for comparison were always in the upper quantile of mean event rate. Applying this filter increased the overall fraction of responsive neurons in the imaging dataset, such that the experimental imaging and synthetic imaging distributions had the highest similarity when between 7 and 39% of the most active imaged neurons were included (V1: 39%, LM: 34%, AL: 25%, PM: 7%, AM: 14%) . This indicates that more active neurons tend to be more responsive to our visual stimuli, which could conceivably account for the discrepancy in overall responsiveness between the two modalities. However, applying this event rate threshold actually increased the differences between the selectivity distributions, as the most active imaged neurons were also more selective . Thus, sub-selection of imaged neurons based on event rate was not sufficient to fully reconcile the differences between ephys and imaging. Performing the same analysis using sub-selection based on the coefficient of variation, an alternative measure of response reliability, yielded qualitatively similar results . If the ephys dataset includes spike trains that are contaminated with spurious spikes from one or more nearby neurons then it may help to compare our imaging results only to the least contaminated neurons from the ephys dataset. Our initial QC process excluded units with an inter-spike interval (ISI) violations score ( , see Materials and methods for definition) above 0.5, to remove highly contaminated units, but while the presence of refractory period violations implies contamination, the absence of such violations does not imply error-free clustering, so some contamination may remain. We systematically decreased our tolerance for ISI-violating ephys neurons, so the remaining neurons used for comparison were always in the lower quantile of contamination level. For the most restrictive thresholds, where there was zero detectable contamination in the original spike trains (ISI violations score = 0), the match between the synthetic imaging and experimental imaging selectivity and responsiveness distributions was maximized . This indicates that, unsurprisingly, contamination by neighboring neurons (as measured by ISI violations score) reduces selectivity and increases responsiveness. Therefore, the inferred functional properties are most congruent across modalities when the ephys analysis includes a stringent threshold on the maximum allowable contamination level. Results across stimulus types The previous results have primarily focused on the drifting gratings stimulus, but we observe similar effects for all of the stimulus types shared between the imaging and ephys datasets. 
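Before summarizing results across stimulus types, the two sub-selection filters just described can be written as simple masks: keeping only the most active imaged neurons (upper quantile of mean event rate) and only the least contaminated ephys units (lowest ISI violations scores). The quantile cutoffs below are placeholders rather than recommended values.

import numpy as np

def most_active_mask(mean_event_rates, keep_fraction=0.25):
    """Keep imaged neurons in the upper quantile of mean event rate."""
    rates = np.asarray(mean_event_rates)
    return rates >= np.quantile(rates, 1.0 - keep_fraction)

def least_contaminated_mask(isi_violation_scores, max_score=0.0):
    """Keep ephys units whose ISI violations score does not exceed max_score
    (0.0 corresponds to no detectable refractory period contamination)."""
    return np.asarray(isi_violation_scores) <= max_score

Consistent with the results above, applying most_active_mask narrows the responsiveness gap but widens the selectivity gap, whereas tightening max_score improves the match for both classes of metrics.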
summarizes the impact of each transformation we performed, either before or after applying the forward model, for drifting gratings, static gratings, natural scenes, and natural movies. Across all stimulus types, the forward model had very little impact on responsiveness. Instead, sub-selecting the most active neurons from our imaging experiments using an event-rate filter rendered the shape of the distributions the most similar. For the stimulus types for which we could measure preference across a small number of categories (temporal frequency of drifting gratings and spatial frequency of static gratings), no data transformations were able to improve the overall match between the ephys and imaging distributions, as they were already very similar in the baseline comparison. For selectivity metrics (lifetime sparseness), applying the forward model played the biggest role in improving cross-modal similarity, although there was a greater discrepancy between the resulting distributions for static gratings, natural scenes, and natural movies than there was for drifting gratings. Filtering ephys neurons based on ISI violations further reduced the Jensen–Shannon distance, but it still remained well above zero. This indicates that the transformations we employed could not fully reconcile observed differences in selectivity distributions between ephys and imaging.

Lessons for future comparisons

Our study shows that population-level functional metrics computed from imaging and electrophysiology experiments can display systematic biases. What are the most important takeaways that should be considered for those performing similar comparisons?

Preference metrics are similar across modalities

At least for the cell population we considered (putative excitatory neurons from all layers of visual cortex), preference metrics (such as preferred temporal frequency, preferred spatial frequency, and preferred direction) were largely similar between imaging and electrophysiology . Because these are categorical metrics defined as the individual condition (out of a finite set) that evokes the strongest mean response, they are robust to the choice of calcium event extraction method and also remain largely invariant to the application of a spikes-to-calcium forward model to electrophysiology data. One caveat to keep in mind is that when imaging from more specific populations (e.g. using a transgenic line that limits imaging to a specific subtype of neuron in a specific layer), electrophysiology experiments may yield conflicting preference metrics unless the sample is carefully matched across modalities (e.g. by genetically tagging electrically recorded neurons with a light-sensitive opsin).

Differences in responsiveness metrics largely stem from ephys selection bias

In our original comparison, a larger fraction of ephys units were found to be responsive to every stimulus type ; this did not change after applying a spikes-to-calcium forward model . We believe this is primarily due to two factors: (1) Extracellular electrophysiology cannot detect neurons if they fire few or no action potentials within the sampling period. Unresponsive neurons with low background firing rates fall into this category, and are therefore not counted in the ‘yield’ of an ephys experiment. (2) Most or all ephys ‘units’ include some fraction of contaminating spikes from nearby neurons with similar waveform shapes.
Contamination is more likely to occur during periods of high waveform variability, for example during burst-dependent spike amplitude adaptation. Because bursts are prevalent when a neuron responds to a stimulus, stimulus-evoked spikes are the ones that are most likely to contaminate the spike train of an unresponsive cell. How should these biases be accounted for? Most importantly, when comparing responsiveness, or analyses that build on responsiveness, between ephys and imaging experiments, only the cleanest, least contaminated ephys units should be included (based on their ISI violations or another purity metric). In addition, one can reduce differences in responsiveness by filtering out the least active neurons from imaging experiments, or simply by using a higher responsiveness threshold for ephys than for imaging. For example, one could use a sliding threshold to find the point where the overall rate of responsive neurons is matched between the two modalities, and perform subsequent comparisons using this threshold. It should also be noted that the method of calcium event detection can affect responsiveness metrics; with a more permissive event detection threshold, for instance, the population appears more responsive . However, it is clear that a lower threshold leads to a higher fraction of false-positive events, as is shown using ground truth data , and this increases the probability that noise in the underlying fluorescence will contaminate the results. As background fluorescence is most variable when nearby neurons or processes respond to their respective preferred stimulus condition, additionally detected events are likely to be an overfitting of stimulus-correlated noise .

Selectivity metrics are highly sensitive to the parameters used for calcium event extraction

Differences in selectivity (or tuning curve sharpness) are the most difficult to compare across modalities. This is because most commonly used selectivity metrics take into account the ratio between the peak and baseline response, and the relative size of these responses is highly influenced by the rate and size of ‘background’ events. When counting spikes in electrophysiology, the largest and smallest responses typically fall within the same order of magnitude; with imaging, however, calcium event amplitudes can easily vary over several orders of magnitude. In addition, the specific method used for calcium event detection can have a big impact on background event rate. Because these events always have a positive amplitude, events detected around the noise floor cannot be canceled out by equivalently sized negative events. In principle, one could try to match selectivity across modalities by tuning the parameters of the event extraction algorithm. However, this is not recommended, because it obfuscates real biases in the data (such as the sparsifying effect of calcium indicators) and can lead to inconsistencies (e.g. it fails to consistently match both selectivity and responsiveness across stimulus classes). Instead, as a more principled way to compare selectivity between imaging and ephys experiments, we recommend the use of a spikes-to-calcium forward model, with the same event extraction algorithm applied to both the real and the synthetic calcium traces .
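To illustrate why background events matter so much for selectivity, the sketch below computes global orientation selectivity (gOSI) from per-direction response magnitudes, using one common circular-variance-style definition (the exact formula used for these datasets may differ). A small constant background added to every condition pulls the value toward zero, which is precisely the effect of summing many low-amplitude false positive events.

import numpy as np

def global_osi(directions_deg, mean_responses):
    """gOSI = |sum_k R_k * exp(2i * theta_k)| / sum_k R_k, with theta in radians.
    Returns 1 for responses confined to one orientation, 0 for a flat tuning curve."""
    theta = np.deg2rad(np.asarray(directions_deg, dtype=float))
    r = np.asarray(mean_responses, dtype=float)
    if r.sum() == 0:
        return 0.0
    return np.abs(np.sum(r * np.exp(2j * theta))) / r.sum()

# Example: the same tuning curve with and without a small constant background.
directions = np.arange(0, 360, 45)
tuned = np.array([0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 5.0, 0.0])   # responds at 90/270 deg
print(global_osi(directions, tuned))          # approximately 1.0
print(global_osi(directions, tuned + 0.5))    # drops to roughly 0.71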
Inter-modal differences in running speed, laminar sampling patterns, and transgenic mouse lines do not substantially bias functional metrics Our ephys recordings included a higher fraction of neurons from layer 5 than our imaging experiments , while mice from imaging experiments were less active runners . Furthermore, typical ephys experiments do not use transgenic mice that express calcium indicators, while this is common for imaging. Correcting these biases did not appreciably change the population-level functional metrics. However, it must be noted that we used nearly identical behavioral apparatuses, habituation protocols, and stimulus sets between modalities. When comparing across studies with methods that were not as carefully matched, behavioral differences may have a larger influence on the results. Interpreting higher order analyses in light of our findings The differences in responsiveness and selectivity metrics computed from the ephys and imaging datasets suggest that functional properties of neurons in the mouse visual cortex can appear to be dependent on the choice of recording modality. These effects extend beyond responsiveness and selectivity, as higher order analyses often build on these more fundamental metrics. Here, we focus on a representative example, a functional classification scheme based on responses to four stimulus types (drifting gratings, static gratings, natural scenes, and natural movies), which, in our previous work, revealed distinct response classes in the imaging dataset . We aim to illustrate, using this recent example, that the effect of recording modality on responsiveness goes far beyond merely impacting how many neurons might be included in an analysis but may also propagate to conclusions we draw about the functional properties of neuronal (sub)populations. Our original study, based on imaging data alone, revealed that only ~10% of the neurons in the imaging dataset responded reliably to all four stimuli, while the largest class of neurons contained those that did not respond reliably to any of the stimuli. This classification result suggested that many neurons in the mouse visual cortex are not well described by classical models and may respond to more intricate visual or non-visual features. Here, we perform an analogous analysis on the ephys dataset to show how this classification is impacted by modality specific biases. As in our original study, we performed unsupervised clustering on the 4 x N matrix of response reliabilities for each unit’s preferred condition for each stimulus class, where N represents the number of units . The resulting clusters were assigned class labels based on whether their mean response reliability was above an initial threshold of 25% (to match the percentage we used in our previous analysis of the imaging dataset). After labeling each cluster, we calculated the fraction of units belonging to each functional class. We averaged over 100 different clustering initializations to obtain the average class membership of units across runs . Running this analysis on the ephys dataset ‘naively’, seemed to reveal a very different landscape of functional classes. In stark contrast to the published imaging results, for ephys, the largest class (~40% of the units) contained units that responded reliably to all four classes of stimuli, while the class that did not respond reliably to any of the stimuli was empty. This is consistent with the observation that responsiveness is higher in the ephys dataset for each stimulus type . 
To account for this bias, we systematically raised the responsiveness threshold used to group clusters into functional classes for the ephys dataset and found that the distribution of classes became more similar to the distribution of classes for the imaging dataset . A threshold of 40% response reliability for the ephys dataset minimized the Jensen–Shannon distance between the distributions, rendering the class assignments in ephys remarkably similar to those for the imaging dataset . The class labels for each neuron reflect the pattern of cross-stimulus response reliability, and as such provide an indication of its ‘meta-preference’ for different stimulus types. Thus, once we account for the generally higher levels of responsiveness seen in the ephys dataset, we observe similar meta-preferences and thus functional organization as for the imaging dataset. This example highlights how recording-modality-specific biases can affect higher-order conclusions about functional properties, and how a fundamental understanding of such biases can be leveraged to explain and resolve apparent contradictions.
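A compact sketch of the classification analysis described above: cluster the units-by-stimulus matrix of response reliabilities, label each cluster by which stimulus classes exceed a reliability threshold, and sweep the ephys threshold to find the value that best matches the imaging class distribution. KMeans, the cluster count, and the stimulus abbreviations are illustrative choices; the published analysis may have used a different clustering algorithm and settings.

import numpy as np
from itertools import combinations
from scipy.spatial.distance import jensenshannon
from sklearn.cluster import KMeans

STIMULI = np.array(['DG', 'SG', 'NS', 'NM'])   # drifting/static gratings, scenes, movies
ALL_CLASSES = ['none'] + ['-'.join(c) for k in range(1, 5)
                          for c in combinations(STIMULI, k)]

def assign_classes(reliabilities, threshold, n_clusters=20, seed=0):
    """Cluster the (n_units x 4) reliability matrix, then label each unit by the
    stimulus classes whose mean reliability in its cluster exceeds `threshold`."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(reliabilities)
    labels = []
    for cluster_id in km.labels_:
        responsive = STIMULI[km.cluster_centers_[cluster_id] > threshold]
        labels.append('-'.join(responsive) if responsive.size else 'none')
    return np.array(labels)

def class_fractions(labels):
    counts = np.array([(labels == c).sum() for c in ALL_CLASSES], dtype=float)
    return counts / counts.sum()

def best_ephys_threshold(ephys_reliabilities, imaging_reliabilities,
                         imaging_threshold=0.25):
    """Sweep the ephys responsiveness threshold and return the value minimizing the
    Jensen-Shannon distance between ephys and imaging class-fraction distributions."""
    target = class_fractions(assign_classes(imaging_reliabilities, imaging_threshold))
    best_t, best_d = None, np.inf
    for t in np.arange(0.25, 0.65, 0.05):
        d = jensenshannon(class_fractions(assign_classes(ephys_reliabilities, t)),
                          target, base=2)
        if d < best_d:
            best_t, best_d = t, d
    return best_t, best_d

A sweep of this kind, run on the actual reliability matrices, is what identifies the higher (here, 40%) reliability threshold for the ephys dataset described above.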
In order to directly compare results from ephys and imaging, we first calculated the magnitude of each neuron’s response to individual trials, which were defined as the interval over which a stimulus was present on the screen. We computed a variety of metrics based on these response magnitudes, and compared the overall distributions of those metrics for all the neurons in each visual area. The methods for measuring these responses necessarily differ between modalities, as explained below. For the ephys dataset, stimulus-evoked responses were computed using the spike times identified by Kilosort2 . Kilosort2 uses information in the extracellularly recorded voltage traces to find templates that fit the spike waveform shapes of all the units in the dataset, and assigns a template to each spike. The process of ‘spike sorting’—regardless of the underlying algorithm—does not perfectly recover the true underlying spike times, and has the potential to miss spikes (false negatives) or assign spikes (or noise waveforms) to the wrong unit (false positives). The magnitude of the response for a given trial was determined by counting the total number of spikes (including false positives and excluding false negatives) that occured during the stimulation interval. This spike-rate–based analysis is the de facto standard for analyzing electrophysiology data, but it washes out information about bursting or other within-trial dynamics. For example, a trial that includes a four-spike burst will have the same apparent magnitude as a trial with four isolated spikes . Methods for determining response magnitudes for neurons in imaging datasets are less standardized, and deserve careful consideration. The most commonly used approach involves averaging the continuous, baseline-normalized fluorescence signal over the trial interval. This method relies on information that is closer to the raw data. However, it suffers the severe drawback that, due to the long decay time of calcium indicators, activity from one trial can contaminate the fluorescence trace during the next trial, especially when relatively short (<1 s) inter-stimulus intervals are used. To surmount this problem, one can attempt to determine the onset of abrupt changes in fluorescence and analyze these extracted ‘events,’ rather than the continuous trace. There are a variety of algorithms available for this purpose, including non-negative deconvolution , approaches that model calcium binding kinetics , and methods based on machine learning . For our initial comparison, we extracted events using the same method we applied to our previous analysis of the large-scale imaging dataset . This algorithm finds event times by reframing ℓ 0 -regularized deconvolution as a change point detection problem that has a mathematically guaranteed, globally optimal ‘exact’ solution (hereafter, ‘exact ℓ 0 ’; ; ). The algorithm includes a sparsity constraint (λ) that is calibrated to each neuron’s overall noise level. For the most part, the events that are detected from the 2P imaging data do not represent individual spikes, but rather are heavily biased towards indicating short bouts of high firing rate, for example bursting . There is, however, rich information contained in the amplitudes of these events, which have a non-linear—albeit on average monotonic—relationship with the underlying number of true spikes within a window. 
Therefore, in our population imaging dataset, we calculated the trial response magnitude by summing the amplitudes of events that occurred during the stimulation interval . In example trials for the same hypothetical neuron recorded with both modalities , the response magnitudes are equivalent from the perspective of electrophysiology. However, from the perspective of imaging, the trial that includes a spike burst (which results in a large influx of calcium) may have an order-of-magnitude larger response than a trial that only includes isolated spikes.
A comparison between individual neurons highlights the effect of differences in response magnitude calculation on visual physiology. A spike raster from a neuron in V1 recorded with electrophysiology appears much denser than the corresponding event raster for a separate neuron that was imaged in the same area . For each neuron, we computed responsiveness, preference, and selectivity metrics. We consider both neurons to be responsive to the drifting gratings stimulus class because they have a significant response (p < 0.05, compared to a distribution of activity taken during the epoch of spontaneous activity) on at least 25% of the trials of the preferred condition (the grating direction and temporal frequency that elicited the largest mean response) . Since these neurons were deemed responsive according to this criterion, their function was further characterized in terms of their preferred stimulus condition and their selectivity (a measure of tuning curve sharpness). We use lifetime sparseness as our primary selectivity metric, because it is a general metric that is applicable to every stimulus type. It reflects the distribution of responses of a neuron across some stimulus space (e.g. natural scenes or drifting gratings), equaling 0 if the neuron responds equivalently to all stimulus conditions, and one if the neuron only responds to a single condition. Across all areas and mouse lines, lifetime sparseness is highly correlated with more traditional selectivity metrics, such as drifting gratings orientation selectivity ( R = 0.8 for ephys, 0.79 for imaging; Pearson correlation), static gratings orientation selectivity ( R = 0.79 for ephys, 0.69 for imaging), and natural scenes image selectivity ( R = 0.85 for ephys, 0.95 for imaging). For our initial analysis, we sought to compare the results from ephys and imaging as they are typically analyzed in the literature , prior to any attempt at reconciliation. We will refer to these comparisons as ‘baseline comparisons’ in order to distinguish them from subsequent comparisons made after applying one or more transformations to the imaging and/or ephys datasets. We pooled responsiveness, preference, and selectivity metrics for all of the neurons in a given visual area across experiments, and quantified the disparity between the imaging and ephys distributions using Jensen–Shannon distance. This is the square root of the Jensen–Shannon divergence, which is a method of measuring the disparity between two probability distributions that is symmetric and always has a finite value . Jensen–Shannon distance is equal to 0 for perfectly overlapping distributions, and one for completely non-overlapping distributions, and falls in between these values for partially overlapping distributions. Across all areas and stimuli, the fraction of responsive neurons was higher in the ephys dataset than the imaging dataset . To quantify the difference between modalities, we computed the Jensen–Shannon distance for the distributions of response reliabilities, rather than the fraction of responsive neurons at the 25% threshold level. This is done to ensure that our results are not too sensitive to the specific responsiveness threshold we have chosen. We found tuning preferences to be consistent between the two modalities, including preferred temporal frequency , preferred direction , preferred orientation , and preferred spatial frequency . This was based on the qualitative similarity of their overall distributions, as well as their low values of Jensen–Shannon distance. 
Selectivity metrics, such as lifetime sparseness , orientation selectivity , and direction selectivity , were consistently higher in imaging than ephys.
To control for potential high-level variations across the imaging and ephys experimental preparations, we first examined the effect of laminar sampling bias. For example, the ephys dataset contained more neurons in layer 5, due to the presence of large, highly active cells in this layer. The imaging dataset, on the other hand, had more neurons in layer 4 due to the preponderance of layer 4 Cre lines included in the dataset . After resampling each dataset to match layer distributions ( , see Materials and methods for details), we saw very little change in the overall distributions of responsiveness, preference, and selectivity metrics , indicating that laminar sampling biases are likely not a key cause of the differences we observed between the modalities. We next sought to quantify the influence of behavioral differences on our comparison. As running and other motor behavior can influence visually evoked responses , could modality-specific behavioral differences contribute to the discrepancies in the response metrics? In our datasets, mice tend to spend a larger fraction of time running in the ephys experiments, perhaps because of the longer experiment duration, which may be further confounded by genotype-specific differences in running behavior . Within each modality, running had a similar impact on visual response metrics. On average, units in ephys and neurons in imaging have slightly lower responsiveness during periods of running versus non-running , but slightly higher selectivity . To control for the effect of running, we sub-sampled our imaging experiments in order to match the overall distribution of running fraction to the ephys data . This transformation had a negligible impact on responsiveness, selectivity, and preference metrics . From this analysis we conclude that, at least for the datasets examined here, behavioral differences do not account for the differences in functional properties inferred from imaging and ephys.
We sought to determine whether our approach to extracting events from the 2P data could explain between-modality differences in responsiveness and selectivity. Prior work has shown that scientific conclusions can depend on both the method of event extraction and the chosen parameters . However, the impact of different algorithms on functional metrics has yet to be assessed in a systematic way. To address this shortcoming, we first compared two event detection algorithms, exact ℓ 0 and unpenalized non-negative deconvolution (NND), another event extraction method that performs well on a ground truth dataset of simultaneous two-photon imaging and loose patch recordings from primary visual cortex . Care was taken to ensure that the characteristics of the ground truth imaging data matched those of our large-scale population recordings in terms of their imaging resolution, frame rate, and noise levels, which implicitly accounted for differences in laser power across experiments. The correlation between the ground truth firing rate and the overall event amplitude within a given time bin is a common way of assessing event extraction performance . Both algorithms performed equally well in terms of their ability to predict the instantaneous firing rate (for 100 ms bins, exact ℓ 0 r = 0.48 ± 0.23; NND r = 0.50 ± 0.24; p = 0.1, Wilcoxon signed-rank test). However, this metric does not capture all of the relevant features of the event time series. In particular, it ignores the rate of false positive events that are detected in the absence of a true underlying spike . We found that the exact ℓ 0 method, which includes a built-in sparsity constraint, had a low rate of false positives (8 ± 2% in 100 ms bins, N = 32 ground truth recordings), whereas NND had a much higher rate (21 ± 4%; p = 8e-7, Wilcoxon signed-rank test). Small-amplitude false positive events have very little impact on the overall correlation between the ground truth spike rate and the extracted events, so parameter optimization does not typically penalize such events. However, we reasoned that the summation of many false positive events could have a noticeable impact on response magnitudes averaged over trials. Because these events always have a positive sign, they cannot be canceled out by low-amplitude negative deflections of similar magnitude, as would occur when analyzing ΔF/F directly. We tested the impact of applying a minimum-amplitude threshold to the event time series obtained via NND. If the cumulative event amplitude within a given time bin (100 ms) did not exceed a threshold (set to a multiple of each neuron’s estimated noise level), the events in that window were removed. As expected, this procedure resulted in almost no change in the correlation between event amplitude and the ground truth firing rate . However, it had a noticeable impact on both the average response magnitude within a given time window , as well as the false positive rate . Applying the same thresholding procedure to an example neuron from our population imaging dataset demonstrates how low-amplitude events can impact a cell’s apparent selectivity level. The prevalence of such events differs between the two compared approaches to event extraction, the exact ℓ 0 method used in and NND with no regularization . 
When summing event amplitudes over many drifting gratings presentations, the difference in background rate has a big impact on the measured value of global orientation selectivity (gOSI), starting from 0.91 when using the exact ℓ 0 and dropping to 0.45 when using NND. However, these differences could be reconciled simply by setting a threshold to filter out low-amplitude events. Extracting events from a population dataset ( N = 3095 Slc17a7+ neurons from V1) using NND resulted in much lower measured overall selectivity levels, even lower than for electrophysiology . Thresholding out events at multiples of each neuron’s noise level (σ) raised selectivity levels; a threshold between 3 and 4σ brought the selectivity distribution closest to the ephys distribution, while a threshold between 4 and 5σ resulted in selectivity that roughly matched that obtained with exact ℓ 0 . The rate of low-amplitude events also affected responsiveness metrics . Responsiveness was highest when all detected events were included, matching or slightly exceeding the levels measured with ephys. Again, applying an amplitude threshold between 4 and 5σ brought responsiveness to the level originally measured with exact ℓ 0 . By imposing the same noise-based threshold on the minimum event size that we originally computed to determine optimal regularization strength (essentially performing a post-hoc regularization), we are able to reconcile the results obtained with NND with those obtained via exact ℓ 0 . This analysis demonstrates that altering the 2P event extraction methodology represents one possible avenue for reconciling results from imaging and ephys. However, as different parameters were needed to reconcile either selectivity or responsiveness, and the optimal parameters further depend on the presented stimulus class, this cannot be the whole story. Furthermore, relying only on 2P event extraction parameters to reconcile results across modalities implies that the ephys data is itself unbiased, and all we need to do is adjust our imaging analysis pipeline until our metrics match. Because we know this is not the case, we explored the potential impacts of additional factors on the discrepancies between our ephys and imaging datasets.
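The interaction between low-amplitude background events and measured selectivity can be made concrete with a small sketch: one helper applies the noise-based amplitude threshold to binned deconvolved events, as described above, and a second computes gOSI assuming the standard vector-sum definition (the text does not spell out the exact formula, so this is an assumption on our part). Function names and the toy numbers are illustrative only.

```python
import numpy as np

def threshold_binned_events(event_amplitudes, noise_sd, n_sigma=4,
                            rate=150.0, bin_size=0.1):
    """Sum event amplitudes in 100 ms bins and zero out bins whose total
    falls below n_sigma times the neuron's estimated noise level."""
    samples_per_bin = int(round(bin_size * rate))
    n_bins = int(np.ceil(len(event_amplitudes) / samples_per_bin))
    padded = np.zeros(n_bins * samples_per_bin)
    padded[:len(event_amplitudes)] = event_amplitudes
    binned = padded.reshape(n_bins, samples_per_bin).sum(axis=1)
    binned[binned < n_sigma * noise_sd] = 0.0
    return binned

def global_osi(directions_deg, responses):
    """Global orientation selectivity index (vector-sum definition):
    |sum(R * exp(2i*theta))| / sum(R)."""
    responses = np.asarray(responses, dtype=float)
    theta = np.deg2rad(np.asarray(directions_deg, dtype=float))
    total = responses.sum()
    if total == 0:
        return 0.0
    return float(np.abs(np.sum(responses * np.exp(2j * theta))) / total)

# toy demonstration: adding a constant background to every condition
# substantially lowers gOSI even though the tuning peak is unchanged
directions = np.arange(0, 360, 45)
tuned = np.array([10, 0, 0, 0, 8, 0, 0, 0], dtype=float)
print(global_osi(directions, tuned))          # 1.0
print(global_osi(directions, tuned + 2.0))    # ~0.53
```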
Given that imaging (but not ephys) approaches fundamentally require the expression of exogenous proteins (e.g. Cre, tTA, and GCaMP6f in the case of our transgenic mice) in specific populations of neurons, we sought to determine whether such foreign transgenes, expressed at relatively high levels, could alter the underlying physiology of the neural population. All three proteins have been shown to have neurotoxic effects under certain conditions , and calcium indicators, which by design bind intracellular calcium, can additionally interfere with cellular signaling pathways. To examine whether the expression of these genes could explain the differences in functional properties inferred from imaging and ephys experiments, we performed electrophysiology in mice that expressed GCaMP6f under the control of specific Cre drivers. We collected data from mice with GCaMP6f expressed in dense excitatory lines (Cux2 and Slc17a7) or in sparse inhibitory lines (Vip and Sst), and compared the results to those obtained from wild-type mice . On average, we recorded 45.9 ± 7.5 neurons per area in 17 wild-type mice, and 55.8 ± 15.6 neurons per area in 19 GCaMP6f transgenic mice . The distribution of firing rates of recorded neurons in mice from all Cre lines was similar to the distribution for units in wild-type mice . Because some GCaMP mouse lines have been known to exhibit aberrant seizure-like activity , we wanted to check whether spike bursts were more prevalent in these mice. We detected bursting activity using the LogISI method, which identifies bursts in a spike train based on an adaptive inter-spike interval threshold . The dense excitatory Cre lines showed a slight increase in burst fraction (the fraction of all spikes that participate in bursts) compared to wild-type mice . This minor increase in burstiness, however, was not associated with changes in responsiveness or selectivity metrics that could account for the baseline differences between the ephys and imaging datasets. The fraction of responsive neurons was not lower in the GCaMP6f mice, as it was for the imaging dataset—in fact, in some visual areas there was an increase in responsiveness in the GCaMP6f mice compared to wild-type . In addition, the distribution of selectivities was largely unchanged between wild-type and GCaMP6f mice . Thus, while there may be subtle differences in the underlying physiology of GCaMP6f mice, particularly in the dense excitatory lines, those differences cannot explain the large discrepancies in visual response metrics derived from ephys or imaging.
Given the substantial differences between the properties of extracellularly recorded spikes and events extracted from fluorescence traces , and the potential impact of event extraction parameters on derived functional metrics , we hypothesized that transforming spike trains into simulated calcium events could reconcile some of the baseline differences in response metrics we have observed. The inverse transformation—converting fluorescence events into synthetic spike times—is highly under-specified, due to the reduced temporal resolution of calcium imaging . To implement the spikes-to-calcium transformation, we used MLSpike, a biophysically inspired forward model . MLSpike explicitly considers the cooperative binding between GCaMP and calcium to generate synthetic ΔF/F fluorescence traces using the spike trains for each unit recorded with ephys as input. We extracted events from these traces using the same exact ℓ 0 -regularized detection algorithm applied to our experimental imaging data, and used these events as inputs to our functional metrics calculations . A subset of the free parameters in the MLSpike model (e.g. ΔF/F rise time, Hill parameter, saturation parameter, and normalized resting calcium concentration) were fit to simultaneously acquired loose patch and two-photon-imaging recordings from layer 2/3 of mouse visual cortex . Additionally, three parameters were calibrated on the fluorescence traces from the imaging dataset to capture the neuron-to-neuron variance of these parameters: the average amplitude of a fluorescence transient in response to a spike burst ( A ), the decay time of the fluorescence transients ( τ ), and the level of Gaussian noise in the signal ( σ ) . For our initial characterization, we selected parameter values based on the mode of the overall distribution from the imaging dataset. The primary consequence of the forward model was to ‘sparsify’ each neuron’s response by washing out single spikes while non-linearly boosting the amplitude of ‘bursty’ spike sequences with short inter-spike intervals. When responses were calculated on the ephys spike train, a trial containing a 4-spike burst within a 250 ms window would have the same magnitude as a trial with four isolated spikes across the 2 s trial. After the forward model, however, the burst would be transformed into an event with a magnitude many times greater than the events associated with isolated spikes, due to the nonlinear relationship between spike counts and the resulting calcium-dependent fluorescence. This effect can be seen in stimulus-locked raster plots for the same neuron before and after applying the forward model . What effects does this transformation have on neurons’ inferred functional properties? Applying the forward model plus event extraction to the ephys data did not systematically alter the fraction of responsive units in the dataset. While 8% of neurons switched from being responsive to drifting gratings to unresponsive, or vice versa, they did so in approximately equal numbers . The forward model did not improve the match between the distributions of response reliabilities (our responsiveness metric) for any stimulus type . The forward model similarly had a negligible impact on preference metrics; for example, only 14% of neurons changed their preferred temporal frequency after applying the forward model , and the overall distribution of preferred temporal frequencies still matched that from the imaging experiments . 
In contrast, nearly all neurons increased their selectivity after applying the forward model . Overall, the distribution of lifetime sparseness to drifting gratings became more similar to—but still did not completely match—the imaging distribution across all areas . The average Jensen–Shannon distance between the ephys and imaging distributions was 0.41 before applying the forward model, compared to 0.14 afterward (mean bootstrapped distance between the sub-samples of the imaging distribution = 0.064; p < 0.001 for all areas, since 1000 bootstrap samples never exceeded the true Jensen–Shannon distance; see Materials and methods for details). These results imply that the primary effects of the forward model—providing a supralinear boost to the ‘amplitude’ of spike bursts, and thresholding out single spike events—can account for baseline differences in selectivity, but not responsiveness, between ephys and imaging. To assess whether the discrepancies between the imaging and ephys distributions of responsiveness and selectivity metrics could be further reduced by using a different set of forward model parameters, we brute-force sampled 1000 different parameter combinations for one ephys session, using 10 values each for amplitude, decay time, and noise level , spanning the entire range of parameters calibrated on the experimental imaging data. The fraction of responsive neurons did not change as a function of forward model parameters, except for the lowest values of amplitude and noise level, where it decreased substantially . This parameter combination (A ≤ 0.0015, sigma ≤ 0.03) was observed in less than 1% of actual neurons recorded with two-photon imaging, so it cannot account for differences in responsiveness between the two modalities. Both the difference between the median lifetime sparseness for imaging and ephys, as well as the Jensen–Shannon distance between the full ephys and imaging lifetime sparseness distributions, were near the global minimum for the parameter values we initially used . It is conceivable that the inability of the forward model to fully reconcile differences in responsiveness and selectivity was due to the fact that we applied the same parameters across all neurons of the ephys dataset, without considering their genetically defined cell type. To test for cell-type-specific differences in forward model parameters, we examined the distributions of amplitude, decay time, and noise level for individual excitatory Cre lines used in the imaging dataset. The distributions of parameter values across genotypes were largely overlapping, with the exception of increasing noise levels for some of the deeper populations (e.g. Tlx3-Cre in layer 5, and Ntsr1-Cre_GN220 in layer 6) and an abundance of low-amplitude neurons in the Fezf2-CreER population . Given that higher noise levels and lower amplitudes did not improve the correspondence between the ephys and imaging metric distributions, we concluded that selecting parameter values for individual neurons based on their most likely cell type would not change our results. Furthermore, we saw no correlation between responsiveness or selectivity metrics in imaged neurons and their calibrated amplitude, decay time, or noise level .
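To give a feel for why the forward model sparsifies responses, here is a deliberately simplified spikes-to-fluorescence toy model. It is not MLSpike: it only captures the qualitative ingredients discussed above (an exponentially decaying calcium variable, a saturating Hill-type nonlinearity, and additive noise), and every parameter value below is an arbitrary placeholder rather than a calibrated one.

```python
import numpy as np

def toy_spikes_to_dff(spike_times, duration, rate=30.0, tau=0.36,
                      amp=1.0, hill_n=2.4, k_half=4.0, noise_sd=0.05,
                      rng=None):
    """Toy spikes-to-fluorescence model (NOT MLSpike; illustration only).

    Each spike adds `amp` to a latent calcium variable that decays with time
    constant `tau`; fluorescence is a Hill-type function of calcium plus
    Gaussian noise, sampled at `rate` Hz.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_samples = int(np.ceil(duration * rate))
    spikes_per_frame = np.histogram(spike_times, bins=n_samples,
                                    range=(0, duration))[0]
    decay = np.exp(-1.0 / (tau * rate))
    calcium = np.zeros(n_samples)
    level = 0.0
    for i in range(n_samples):
        level = level * decay + amp * spikes_per_frame[i]
        calcium[i] = level
    # single spikes stay near the noise floor, while closely spaced spikes
    # (bursts) push calcium into the steep part of the Hill curve
    dff = calcium ** hill_n / (calcium ** hill_n + k_half ** hill_n)
    return dff + rng.normal(0.0, noise_sd, size=n_samples)
```

Passing a spike train with a short burst through such a model produces one large transient, whereas the same number of isolated spikes barely rises above the noise, which is the effect that boosts selectivity after event extraction.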
We next sought to determine whether electrophysiology’s well-known selection bias in favor of more active neurons could account for the differences between modalities. Whereas calcium imaging can detect the presence of all neurons in the field of view that express a fluorescent indicator, ephys cannot detect neurons unless they fire action potentials. This bias is exacerbated by the spike sorting process, which requires a sufficient number of spikes in order to generate an accurate template of each neuron’s waveform. Spike sorting algorithms can also mistakenly merge spikes from nearby neurons into a single ‘unit’ or allow background activity to contaminate a spike train, especially when spike waveforms generated by one neuron vary over time, for example due to the adaptation that occurs during a burst. These issues all result in an apparent activity level increase in ephys recordings. In addition, assuming a 50-μm ‘listening radius’ for the probes (the radius of the half-cylinder around the probe within which a neuron’s spike amplitude is sufficiently above noise to trigger detection), the average yield of 116 regular-spiking units/probe (prior to QC filtering) would imply a density of 42,000 neurons/mm³, much lower than the known density of ~90,000 neurons/mm³ for excitatory cells in mouse visual cortex. If the ephys dataset is biased toward recording neurons with higher firing rates, it may be more appropriate to compare it with only the most active neurons in the imaging dataset. To test this, we systematically increased the event rate threshold for the imaged neurons, so the remaining neurons used for comparison were always in the upper quantile of mean event rate. Applying this filter increased the overall fraction of responsive neurons in the imaging dataset, such that the experimental imaging and synthetic imaging distributions had the highest similarity when between 7 and 39% of the most active imaged neurons were included (V1: 39%, LM: 34%, AL: 25%, PM: 7%, AM: 14%). This indicates that more active neurons tend to be more responsive to our visual stimuli, which could conceivably account for the discrepancy in overall responsiveness between the two modalities. However, applying this event rate threshold actually increased the differences between the selectivity distributions, as the most active imaged neurons were also more selective. Thus, sub-selection of imaged neurons based on event rate was not sufficient to fully reconcile the differences between ephys and imaging. Performing the same analysis using sub-selection based on the coefficient of variation, an alternative measure of response reliability, yielded qualitatively similar results. If the ephys dataset includes spike trains that are contaminated with spurious spikes from one or more nearby neurons, then it may help to compare our imaging results only to the least contaminated neurons from the ephys dataset. Our initial QC process excluded units with an inter-spike interval (ISI) violations score (see Materials and methods for definition) above 0.5, to remove highly contaminated units, but while the presence of refractory period violations implies contamination, the absence of such violations does not imply error-free clustering, so some contamination may remain. We systematically decreased our tolerance for ISI-violating ephys neurons, so the remaining neurons used for comparison were always in the lower quantile of contamination level.
For the most restrictive thresholds, where there was zero detectable contamination in the original spike trains (ISI violations score = 0), the match between the synthetic imaging and experimental imaging selectivity and responsiveness distributions was maximized . This indicates that, unsurprisingly, contamination by neighboring neurons (as measured by ISI violations score) reduces selectivity and increases responsiveness. Therefore, the inferred functional properties are most congruent across modalities when the ephys analysis includes a stringent threshold on the maximum allowable contamination level.
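Both sub-selection steps described above reduce to simple quantile or threshold filters. A sketch is shown below with illustrative function names; the actual analysis swept these cutoffs systematically rather than fixing them.

```python
import numpy as np

def most_active_imaging_neurons(mean_event_rates, fraction=0.25):
    """Indices of imaged neurons in the upper quantile of mean event rate."""
    rates = np.asarray(mean_event_rates)
    cutoff = np.quantile(rates, 1.0 - fraction)
    return np.flatnonzero(rates >= cutoff)

def least_contaminated_ephys_units(isi_violation_scores, max_score=0.0):
    """Indices of ephys units at or below an ISI-violations (contamination) cutoff."""
    scores = np.asarray(isi_violation_scores)
    return np.flatnonzero(scores <= max_score)
```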
The previous results have primarily focused on the drifting gratings stimulus, but we observe similar effects for all of the stimulus types shared between the imaging and ephys datasets. The impact of each transformation we performed, either before or after applying the forward model, is summarized for drifting gratings, static gratings, natural scenes, and natural movies. Across all stimulus types, the forward model had very little impact on responsiveness. Instead, sub-selecting the most active neurons from our imaging experiments using an event-rate filter brought the shapes of the responsiveness distributions closest together. For the stimulus types for which we could measure preference across a small number of categories (temporal frequency of drifting gratings and spatial frequency of static gratings), no data transformations were able to improve the overall match between the ephys and imaging distributions, as they were already very similar in the baseline comparison. For selectivity metrics (lifetime sparseness), applying the forward model played the biggest role in improving cross-modal similarity, although there was a greater discrepancy between the resulting distributions for static gratings, natural scenes, and natural movies than there was for drifting gratings. Filtering ephys neurons based on ISI violations further reduced the Jensen–Shannon distance, but it still remained well above zero. This indicates that the transformations we employed could not fully reconcile the observed differences in selectivity distributions between ephys and imaging.
Our study shows that population-level functional metrics computed from imaging and electrophysiology experiments can display systematic biases. What are the most important takeaways that should be considered for those performing similar comparisons?
At least for the cell population we considered (putative excitatory neurons from all layers of visual cortex), preference metrics (such as preferred temporal frequency, preferred spatial frequency, and preferred direction) were largely similar between imaging and electrophysiology . Because these are categorical metrics defined as the individual condition (out of a finite set) that evokes the strongest mean response, they are robust to the choice of calcium event extraction method and also remain largely invariant to the application of a spikes-to-calcium forward model to electrophysiology data. One caveat to keep in mind is that when imaging from more specific populations (e.g. using a transgenic line that limits imaging to a specific subtype of neuron in a specific layer), electrophysiology experiments may yield conflicting preference metrics unless the sample is carefully matched across modalities (e.g. by genetically tagging electrically recorded neurons with a light-sensitive opsin).
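Because preference metrics are simple argmax operations over condition-averaged responses, they can be computed identically for both modalities. A minimal sketch follows; the table layout and column names are placeholders, not the actual AllenSDK interface.

```python
import pandas as pd

def preferred_condition(trial_table: pd.DataFrame):
    """Preferred stimulus condition as the argmax of the mean response.

    `trial_table` is assumed to have one row per presentation with columns
    'direction', 'temporal_frequency', and 'response' (spike count or
    summed event amplitude).
    """
    mean_by_condition = (trial_table
                         .groupby(["direction", "temporal_frequency"])["response"]
                         .mean())
    return mean_by_condition.idxmax()  # e.g. (90.0, 2.0)

def preferred_temporal_frequency(trial_table: pd.DataFrame):
    """Preferred temporal frequency, averaged across directions."""
    return trial_table.groupby("temporal_frequency")["response"].mean().idxmax()
```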
In our original comparison, a larger fraction of ephys units were found to be responsive to every stimulus type; this did not change after applying a spikes-to-calcium forward model. We believe this is primarily due to two factors: (1) Extracellular electrophysiology cannot detect neurons if they fire few or no action potentials within the sampling period. Unresponsive neurons with low background firing rates fall into this category, and are therefore not counted in the ‘yield’ of an ephys experiment. (2) Most or all ephys ‘units’ include some fraction of contaminating spikes from nearby neurons with similar waveform shapes. Contamination is more likely to occur during periods of high waveform variability, for example during burst-dependent spike amplitude adaptation. Because bursts are prevalent when a neuron responds to a stimulus, stimulus-evoked spikes are the ones that are most likely to contaminate the spike train of an unresponsive cell. How should these biases be accounted for? Most importantly, when comparing responsiveness, or analyses that build on responsiveness, between ephys and imaging experiments, only the cleanest, least contaminated ephys units should be included (based on their ISI violations score or another purity metric). In addition, one can reduce differences in responsiveness by filtering out the least active neurons from imaging experiments, or simply by using a higher responsiveness threshold for ephys than for imaging. For example, one could use a sliding threshold to find the point where the overall rate of responsive neurons is matched between the two modalities, and perform subsequent comparisons using this threshold. It should also be noted that the method of calcium event detection can affect responsiveness metrics; with a more permissive event detection threshold, for instance, the population appears more responsive. However, a lower threshold clearly leads to a higher fraction of false-positive events, as shown using ground truth data, and this increases the probability that noise in the underlying fluorescence will contaminate the results. Because background fluorescence is most variable when nearby neurons or processes respond to their respective preferred stimulus conditions, these additional detected events are likely to reflect overfitting of stimulus-correlated noise.
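One way to implement the sliding-threshold suggestion above is sketched below: the ephys responsiveness criterion is raised until the fraction of responsive units matches the imaging fraction at the usual 25% criterion. The function name and the grid of candidate thresholds are ours.

```python
import numpy as np

def matched_ephys_threshold(ephys_reliability, imaging_reliability,
                            imaging_threshold=0.25):
    """Find an ephys reliability threshold that matches the imaging fraction
    of responsive neurons.

    Reliabilities are per-neuron fractions of significant preferred-condition
    trials (0-1); the ephys threshold is slid over a grid until the two
    fractions of responsive neurons are as close as possible.
    """
    target = np.mean(np.asarray(imaging_reliability) >= imaging_threshold)
    candidate_thresholds = np.linspace(0.0, 1.0, 101)
    fractions = np.array([np.mean(np.asarray(ephys_reliability) >= t)
                          for t in candidate_thresholds])
    best = candidate_thresholds[np.argmin(np.abs(fractions - target))]
    return best, target
```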
Differences in selectivity (or tuning curve sharpness) are the most difficult to compare across modalities. This is because most commonly used selectivity metrics take into account the ratio between the peak and baseline response, and the relative size of these responses is highly influenced by the rate and size of ‘background’ events. When counting spikes in electrophysiology, the largest and smallest responses typically fall within the same order of magnitude; with imaging, however, calcium event amplitudes can easily vary over several orders of magnitude. In addition, the specific method used for calcium event detection can have a big impact on background event rate. Because these events always have a positive amplitude, events detected around the noise floor cannot be cancelled out by equivalently sized negative events. In principle, one could try to match selectivity across modalities by tuning the parameters of the event extraction algorithm. However, this is not recommended, because it obfuscates real biases in the data (such as the sparsifying effect of calcium indicators) and can lead to inconsistencies (e.g. it fails to consistently match both selectivity and responsiveness across stimulus classes). Instead, as a more principled way to compare selectivity between imaging and ephys experiments, we recommend the use of a spikes-to-calcium forward model, with the same event extraction algorithm applied to both the real and the synthetic calcium traces .
Our ephys recordings included a higher fraction of neurons from layer 5 than our imaging experiments , while mice from imaging experiments were less active runners . Furthermore, typical ephys experiments do not use transgenic mice that express calcium indicators, while this is common for imaging. Correcting these biases did not appreciably change the population-level functional metrics. However, it must be noted that we used nearly identical behavioral apparatuses, habituation protocols, and stimulus sets between modalities. When comparing across studies with methods that were not as carefully matched, behavioral differences may have a larger influence on the results.
The differences in responsiveness and selectivity metrics computed from the ephys and imaging datasets suggest that functional properties of neurons in the mouse visual cortex can appear to depend on the choice of recording modality. These effects extend beyond responsiveness and selectivity, as higher-order analyses often build on these more fundamental metrics. Here, we focus on a representative example: a functional classification scheme based on responses to four stimulus types (drifting gratings, static gratings, natural scenes, and natural movies), which, in our previous work, revealed distinct response classes in the imaging dataset. We aim to illustrate, using this recent example, that the effect of recording modality on responsiveness not only determines how many neurons are included in an analysis but can also propagate to the conclusions we draw about the functional properties of neuronal (sub)populations. Our original study, based on imaging data alone, revealed that only ~10% of the neurons in the imaging dataset responded reliably to all four stimuli, while the largest class of neurons contained those that did not respond reliably to any of the stimuli. This classification result suggested that many neurons in the mouse visual cortex are not well described by classical models and may respond to more intricate visual or non-visual features. Here, we perform an analogous analysis on the ephys dataset to show how this classification is impacted by modality-specific biases. As in our original study, we performed unsupervised clustering on the 4 × N matrix of response reliabilities for each unit’s preferred condition for each stimulus class, where N represents the number of units. The resulting clusters were assigned class labels based on whether their mean response reliability was above an initial threshold of 25% (to match the percentage we used in our previous analysis of the imaging dataset). After labeling each cluster, we calculated the fraction of units belonging to each functional class. We averaged over 100 different clustering initializations to obtain the average class membership of units across runs. Running this analysis on the ephys dataset ‘naively’ seemed to reveal a very different landscape of functional classes. In stark contrast to the published imaging results, for ephys, the largest class (~40% of the units) contained units that responded reliably to all four classes of stimuli, while the class that did not respond reliably to any of the stimuli was empty. This is consistent with the observation that responsiveness is higher in the ephys dataset for each stimulus type. To account for this bias, we systematically raised the responsiveness threshold used to group clusters into functional classes for the ephys dataset and found that the distribution of classes became more similar to the distribution of classes for the imaging dataset. A threshold of 40% response reliability for the ephys dataset minimized the Jensen–Shannon distance between the distributions, rendering the class assignments in ephys remarkably similar to those for the imaging dataset. The class labels for each neuron reflect its pattern of cross-stimulus response reliability, and as such provide an indication of its ‘meta-preference’ for different stimulus types. Thus, once we account for the generally higher levels of responsiveness seen in the ephys dataset, we observe meta-preferences, and thus a functional organization, similar to those in the imaging dataset.
This example highlights how recording-modality-specific biases can affect higher-order conclusions about functional properties, and how a fundamental understanding of such biases can be leveraged to explain and resolve apparent contradictions.
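For readers who want to reproduce the outline of this analysis, a stripped-down sketch of the clustering-and-labeling step is given below using scikit-learn. Unlike the full procedure described in the Materials and methods, it selects the number of clusters with in-sample BIC from a single fit rather than cross-validated BIC averaged over repeated initializations, so it should be read as an outline of the logic only; the function name is ours.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def functional_classes(reliabilities, threshold=0.25, max_clusters=20, seed=0):
    """Cluster neurons by their cross-stimulus response reliabilities.

    `reliabilities` is an (N, 4) array (drifting gratings, static gratings,
    natural scenes, natural movies). Clusters are fit with a Gaussian mixture
    (number of components chosen by BIC) and then grouped into classes by
    thresholding each cluster's mean reliability per stimulus type.
    """
    models = [GaussianMixture(n_components=k, random_state=seed).fit(reliabilities)
              for k in range(1, max_clusters + 1)]
    best = min(models, key=lambda m: m.bic(reliabilities))
    labels = best.predict(reliabilities)

    class_names = []
    for cluster in range(best.n_components):
        mean_rel = reliabilities[labels == cluster].mean(axis=0)
        responsive_to = (mean_rel >= threshold).astype(int)
        class_names.append("".join(map(str, responsive_to)))  # e.g. '1010'

    neuron_classes = np.array([class_names[c] for c in labels])
    # fraction of neurons per class ('1111' = reliable to all four stimuli)
    return {c: float(np.mean(neuron_classes == c)) for c in set(class_names)}
```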
In this study, we have compared response metrics derived from mouse visual cortex excitatory populations collected under highly standardized conditions, but using two different recording modalities. Overall, we observe similar stimulus preferences across the two datasets (e.g. preferred temporal frequencies within each visual area), but we see systematic differences in responsiveness and selectivity. Prior to any attempt at reconciliation, electrophysiological recordings showed a higher fraction of units with stimulus-driven activity, while calcium imaging showed higher selectivity (sharper tuning) among responsive neurons. Our comparison of 2P event detection methods showed that the rate of small-amplitude events can influence inferred functional metrics. While the prevalence of false positives in ground truth data has been analyzed previously , this metric is not typically used to optimize event detection parameters. Instead, correlation with ground truth rate is preferred . However, this measure does not account for changes in background rate associated with the prevalence of small-amplitude events. These events always have a positive magnitude, are often triggered by noise, and can dramatically affect measured selectivity levels. In fact, most neurons have selectivity levels near zero when all events detected by non-negative deconvolution are included in the analysis . Measured responsiveness, on the other hand, was highest when all events were included . While this could indicate a greater sensitivity to true underlying spikes, it could also result from contamination by fluctuations in background fluorescence during the visual stimulation interval. Because of confounds such as these, we found it more informative to carry out our comparison on ephys data that has been transformed by the forward model and processed using the same event extraction method as the experimental imaging data. Notably, the forward model boosted selectivity of the ephys data due to the sparsifying effect of calcium dynamics and the exact ℓ 0 event extraction step. As large, burst-dependent calcium transients can have amplitudes several orders of magnitude above the median amplitude , this causes the response to the preferred stimulus condition to be weighted more heavily in selectivity calculations than non-preferred conditions. Similarly, isolated spikes during non-preferred conditions can be virtually indistinguishable from noise when viewed through the lens of the forward model. When the same trials are viewed through the lens of electrophysiology, however, spike counts increase more or less linearly, leading to the appearance of lower selectivity. Unexpectedly, the forward model did not change responsiveness metrics. We initially hypothesized that the lower responsiveness in the imaging dataset was due to the fact that single-spike events are often not translated into detectable calcium transients; in ground truth recordings, around 75% of 100 ms bins with actual spikes do not result in any events detected by the exact ℓ 0 method with the default sparsity constraint . Instead, our observation of unchanged responsiveness following the forward model suggests that differences between modalities are more likely due to electrophysiological sampling bias—that is extracellular recordings missing small or low-firing rate neurons, or merging spike trains from nearby cells. 
In order to reconcile some of these differences, we could either apply a very strict threshold on ISI violations score to the ephys dataset, or remove between 61 and 93% of the least active neurons from the imaging dataset. This should serve as a cautionary tale for anyone estimating the fraction of neurons in an area that appear to increase their firing rate in response to environmental or behavioral events. Without careful controls, contamination from other neurons in the vicinity can make this fraction appear artificially high. While it may be possible to minimize these impurities with improved spike-sorting algorithms, there will always be neurons that even the best algorithms will not be able to distinguish in the face of background noise. Differences in laminar sampling and running behavior between the two modalities had almost no effect on our comparisons. The transgenic expression of GCaMP6f in specific neural populations also did not impact the distributions of functional metrics. Finally, the initial parameters chosen for the forward model produced metric distributions that were close to the optimum match over the realistic parameter space. Therefore, we conclude that the primary contribution to differences in the considered ephys and imaging metrics comes from (1) the intrinsic nature of the spikes-to-calcium transfer function and (2) the selection bias of extracellular electrophysiology recordings. Even after accounting for these known factors to the best of our ability, the overall population from our imaging experiments still displayed higher selectivity than its ephys counterpart. What could account for these remaining differences? One possibility is that there may be residual undetected contamination in our ephys recordings. An ISI violations score of 0 does not guarantee that there is no contamination, just that we are not able to measure it using this metric. Sampling the tissue more densely (i.e. increasing the spatial resolution of spike waveforms) or improving spike sorting methods could reduce this issue. Another possibility is that ‘missed’ spikes—especially those at the end of a burst—could result in reduced amplitudes for the simulated calcium transients. In addition, if the in vivo spikes-to-calcium transfer function is non-stationary, there could be stimulus-dependent changes in calcium concentration that are not captured by a forward model that takes a spike train as its only input. Simultaneous cell-attached electrophysiology and two-photon imaging experiments have demonstrated the existence of ‘prolonged depolarization events’ (PDEs) in some neurons that result in very large increases in calcium concentration, but that are indistinguishable from similar burst events in extracellular recordings. One potential limitation of our approach is that we have only imaged the activity of transgenically expressed calcium indicators, rather than indicators expressed using a viral approach. In addition, the vast majority of our imaging data comes from mice expressing GCaMP6f, with only a small number of GCaMP6s neurons recorded. While we would ideally want to perform the same experiments with viral expression and GCaMP6s, this would require an expensive multi-year effort of similar magnitude to the one that produced our existing imaging dataset. Instead, we have chosen to simulate the effects of these alternative conditions.
In our analysis of the forward model parameter sweep, functional metrics remain relatively constant for a wide range of amplitude and decay time parameters . The full range of this sweep includes decay times that are consistent with GCaMP6s, and event amplitudes that are consistent with viral expression. The forward models currently available in the literature are of comparable power, in that their most complex instantiations allow for non-instantaneous fluorescence rise as well as for a non-linear relationship between calcium concentration and fluorescence. To the best of our knowledge, none of these forward models explicitly model nonstationarities in the spike-to-calcium transfer function. Moreover, all currently available models suffer from the drawback that fits to simultaneously recorded ground truth data yield significant variance in model parameters across neurons . We strove to mitigate this shortcoming by showing that brute-force exploration of MLSpike model parameter space could not significantly improve the match between real and synthetic imaging data. Another potential confound is elevated activity around an implanted probe, which was characterized in a recent study . By performing calcium imaging around a silicon probe that was recently introduced into the brain, the authors found increased intracellular calcium lasting for at least 30 min after implantation. If this is true in our Neuropixels recordings, it could at least partially account for the higher overall firing rates and responsiveness in electrophysiology compared to imaging. Careful simultaneous measurements will be required in order to account for this relative activity increase. While results obtained from ephys and imaging are sometimes treated as if they were interchangeable from a scientific standpoint, in actuality they each provide related but fundamentally different perspectives on the underlying neural activity. Extracellular electrophysiology tells us—with sub-millisecond temporal resolution—about a neuron’s spiking output. Calcium imaging doesn't measure the outgoing action potentials directly, but rather the impact of input and output signals on a neuron’s internal state, in terms of increases in calcium concentration that drive various downstream pathways . While voltage-dependent fluorescent indicators may offer the best of both worlds, there are substantial technical hurdles to employing them on comparably large scales . Thus, in order to correctly interpret existing and forthcoming datasets, we must account for the inherent biases of these two recording modalities. A recent study comparing matched populations recorded with electrophysiology or imaging emphasized differences in the temporal profiles of spike trains and calcium-dependent fluorescence responses . The authors found that event-extraction algorithms that convert continuous ΔF/F traces to putative spike times could not recapitulate the temporal profiles measured with electrophysiology; on the other hand, a forward model that transformed spike times to synthetic ΔF/F traces could make their electrophysiology results appear more like those from the imaging experiments. Their conclusions were primarily based on metrics derived from the evolution of firing rates or ΔF/F ratios over the course of a behavioral trial. However, there are other types of functional metrics that are not explicitly dependent on temporal factors, such as responsiveness, which cannot be reconciled using a forward model alone. 
It is worth emphasizing that we are not suggesting that researchers use the methods we describe here to attempt to make all of their imaging data more similar to electrophysiological data, or vice versa. Since no one single method is intrinsically superior to the other, doing so would merely introduce additional biases. Instead, we recommend that readers examine how sensitive their chosen functional metrics, and, by extension, their derived scientific conclusions are to (1) signal contamination during spike detection and sorting (for electrophysiology data), (2) application of a spike-to-calcium forward model (for electrophysiology data), (3) filtering by event rate (for imaging data), and (4) false positives introduced during event detection (for imaging data). If the chosen functional metrics are found to be largely insensitive to the above transformations, then results can be compared directly across studies that employ different recording modalities. Otherwise, the sensitivity analysis can be used as a means of establishing bounds on the magnitude and direction of expected modality-related discrepancies. More work is needed to understand the detailed physiological underpinning of the modality-specific differences we have observed. One approach, currently underway at the Allen Institute and elsewhere, is to carry out recordings with extremely high-density silicon probes (Neuropixels Ultra), over 10x denser (in terms of the number of electrodes per unit of area) than the Neuropixels 1.0 probes used in this study. Such probes can capture each spike waveform with 100 or more electrodes, making it easier to disambiguate waveforms from nearby neurons, and making it less likely that neurons with small somata would evade detection or that their waveforms would be mistaken for those of other units. These experiments should make it easier to quantify the selection bias of extracellular electrophysiology, as well as the degree to which missed neurons and contaminated spike trains have influenced the results of the current study. In addition, experiments in which silicon probes are combined with two-photon imaging—either through interleaved sampling, spike-triggered image acquisition, or improved artifact removal techniques—could provide more direct ground-truth information about the relationship between extracellular electrophysiology and calcium imaging. Overall, our comparison highlights the value of large-scale, standardized datasets. The fact that functional metrics are sensitive not only to experimental procedures but also to data processing steps and cell inclusion criteria , makes it difficult to directly compare results across studies. Having access to ephys and imaging datasets collected under largely identical conditions allowed us to rule out a number of potential confounds, such as laminar sampling bias and inter-modality behavioral differences. And due to the technical difficulties of scaling simultaneous ephys/imaging experiments, these will, for the foreseeable future, continue to complement and validate large-scale unimodal datasets, rather than replace them. Ultimately, the goal of this work is not to establish the superiority of any one recording modality in absolute terms, since their complementary strengths ensure they will each remain essential to scientific progress for many years to come. Instead, we want to establish guidelines for properly interpreting the massive amounts of data that have been or will be collected using either modality. 
From this study, we have learned that extracellular electrophysiology likely overestimates the fraction of neurons that elevate their activity in response to visual stimuli, in a manner that is consistent with the effects of selection bias and contamination. The apparent differences in selectivity underscore the fact that one must carefully consider the impact of data processing steps (such as event extraction from fluorescence time series), as well as what each modality is actually measuring. Selectivity metrics based on spike counts (the neuron’s outputs) will almost always be lower than selectivity metrics based on calcium concentrations (the neuron’s internal state). Even with this in mind, however, we cannot fully reproduce the observed levels of calcium-dependent selectivity using spike times alone—suggesting that a neuron’s internal state may contain stimulus-specific information that is not necessarily reflected in its outputs. In summary, we have shown that reconciling results across modalities is not straightforward, due to biases that are introduced at the level of the data processing steps, the spatial characteristics of the recording hardware, and the physical signals being measured. We have attempted to account for these biases by (1) altering 2P event detection parameters, (2) applying sub-selection to account for built-in spatial biases, and (3) simulating calcium signals via a forward model. We have shown that functional metrics are sensitive to all of these biases, which makes it difficult to determine which ones are most impactful. For example, to what degree does the bias of ephys for higher-firing-rate units stem from the recording hardware versus the spike sorting procedure? It is possible that more spatially precise sampling, or modifications to the spike sorting algorithm, will reduce this bias in the future. Similarly, how much of the 2P imaging bias stems from the calcium indicator versus the event extraction algorithm? New indicators (such as GCaMP8) may ameliorate some of these problems, as could further optimization of the event extraction step. In the end, these two modalities provide two imperfect yet complementary lenses on the underlying neural activity, and their respective strengths and limitations must be understood when interpreting the recorded activities across experiments.
Previously released data
We used two-photon calcium imaging recordings from the Allen Brain Observatory Visual Coding dataset (2016 Allen Institute for Brain Science, available from observatory.brain-map.org). This dataset consists of calcium fluorescence time series from 63,521 neurons in six different cortical areas across 14 different transgenic lines. Neurons were imaged for three separate sessions (A, B, and C), each of which used a different visual stimulus set. Our analysis was limited to neurons in five areas (V1, LM, AL, PM, and AM) from 10 lines expressing GCaMP6f in excitatory neurons that were present in session A, session B, or both (41,578 neurons in total). We used extracellular electrophysiological recordings from the Allen Brain Observatory Neuropixels dataset (2019 Allen Institute for Brain Science, available from portal.brain-map.org/explore/circuits/visual-coding-neuropixels). This dataset consists of spike trains from 99,180 ‘units’ (putative neurons with varying degrees of completeness and contamination) from 58 mice in a variety of cortical and subcortical structures. We limited our analysis to 31 sessions that used the ‘Brain Observatory 1.1’ stimulus set and units (hereafter, ‘neurons’) from five visual cortical areas (V1, LM, AL, PM, and AM) that displayed ‘regular spiking’ action potential waveforms (peak-to-trough interval > 0.4 ms). Only neurons that passed the following quality control thresholds were included: presence ratio > 0.9 (fraction of the recording session during which spikes are detected), amplitude cutoff < 0.1 (estimate of the fraction of missed spikes), and ISI violations score < 0.5 (estimate of the relative rate of contaminating spikes). After these filtering steps, there were 5917 neurons for analysis.

Neuropixels recordings in GCaMP6f mice
We collected a novel electrophysiology dataset from transgenic mice expressing GCaMP6f, as well as additional wild-type mice. Experiments were conducted in accordance with PHS Policy on Humane Care and Use of Laboratory Animals and approved by the Allen Institute’s Institutional Animal Care and Use Committee under protocols 1409 (‘A scalable data generation pipeline for creation of a mouse Cortical Activity Map’), 1706 (‘Brain Observatory: Optical Physiology’), and 1805 (‘Protocol for in vivo electrophysiology of mouse brain’). The procedures closely followed those described in and are summarized below. Mice were maintained in the Allen Institute animal facility and used in accordance with protocols approved by the Allen Institute’s Institutional Animal Care and Use Committee. Five genotypes were used: wild-type C57BL/6J mice purchased from Jackson Laboratories (n = 2), or Vip-IRES-Cre;Ai148 (n = 3), Sst-IRES-Cre;Ai148 (n = 6), Slc17a7-IRES2-Cre;Camk2a-tTA;Ai93 (n = 3), and Cux2-CreERT2;Camk2a-tTA;Ai93 (n = 3) mice bred in-house. Following surgery, mice were single-housed and maintained on a reverse 12 hr light cycle. All experiments were performed during the dark cycle. At around age P80, mice were implanted with a titanium headframe. In the same procedure, a 5 mm diameter piece of skull was removed over visual cortex, followed by a durotomy. The skull was replaced with a circular glass coverslip coated with a layer of silicone to reduce adhesion to the brain surface.
On the day of recording (at least four weeks after the initial surgery), the glass coverslip was removed and replaced with a plastic insertion window containing holes aligned to six cortical visual areas, identified via intrinsic signal imaging. An agarose mixture was injected underneath the window and allowed to solidify. This mixture was optimized to be firm enough to stabilize the brain with minimal probe drift, but pliable enough to allow the probes to pass through without bending. At the end of this procedure, mice were returned to their home cages for 1–2 hr prior to the recording session. All recordings were carried out in head-fixed mice using Neuropixels 1.0 probes (available from neuropixels.org) mounted on 3-axis stages from New Scale Technologies (Victor, NY). These probes have 383 recording sites oriented in a checkerboard pattern on a 70 μm wide x 10 mm long shank, with 20 µm vertical spacing. Data streams from each electrode were acquired at 30 kHz (spike band) and 2.5 kHz (LFP band) using the Open Ephys GUI. Gain settings of 500x and 250x were used for the spike band and LFP band, respectively. Recordings were referenced to a large, low-impedance electrode at the tip of each probe. Pre-processing, spike sorting, and quality control methods were identical to those used for the previously released dataset (code available at https://github.com/alleninstitute/ecephys_spike_sorting, copy archived at swh:1:rev:995842e4ec67e9db1b7869d885b97317012337db, and https://github.com/MouseLand/Kilosort, copy archived at swh:1:rev:db3a3353d9a374ea2f71674bbe443be21986c82c). Filtering by brain region (V1, LM, AL, PM, and AM), waveform width (>0.4 ms), and QC metrics (presence ratio > 0.9, amplitude cutoff < 0.1, ISI violations score < 0.5) yielded 5113 neurons for analysis. For all analyses except for those in , neurons from this novel dataset were grouped with those from the previously released dataset, for a total of 11,030 neurons. Neurons were registered to 3D brain volumes obtained with an open-source optical projection tomography system (https://github.com/alleninstitute/AIBSOPT, copy archived at swh:1:rev:e38af7e25651fe7517dcf7ca3d38676e3c9c211e). Brains were first cleared using a variant of the iDISCO method, then imaged with white light (for internal structure) or green light (to visualize probe tracks labeled with fluorescent dye). Reconstructed volumes were mapped to the Mouse Common Coordinate Framework (CCFv3) by matching key points in the original brain to corresponding points in a template volume. Finally, probe tracks were manually traced and warped into the CCFv3 space, and electrodes were aligned to structural boundaries based on physiological landmarks.

Visual stimuli
Analysis was limited to epochs of drifting gratings, static gratings, natural scenes, or natural movie stimuli, which were shown with identical parameters across the two-photon imaging and electrophysiology experiments. Visual stimuli were generated using custom scripts based on PsychoPy and were displayed using an ASUS PA248Q LCD monitor, 1920 x 1200 pixels in size (21.93 in wide, 60 Hz refresh rate). Stimuli were presented monocularly, and the monitor was positioned 15 cm from the mouse’s right eye and spanned 120° x 95° of visual space prior to stimulus warping. Each monitor was gamma corrected and had a mean luminance of 50 cd/m².
To account for the close viewing angle of the mouse, a spherical warping was applied to all stimuli to ensure that the apparent size, speed, and spatial frequency were constant across the monitor as seen from the mouse’s perspective. The drifting gratings stimulus consisted of a full-field sinusoidal grating at 80% contrast presented for 2 s, followed by a 1 s mean luminance gray period. Five temporal frequencies (1, 2, 4, 8, 15 Hz), eight different directions (separated by 45°), and one spatial frequency (0.04 cycles per degree) were used. Each grating condition was presented 15 times in random order. The static gratings stimulus consisted of a full-field sinusoidal grating at 80% contrast that was flashed for 250 ms, with no intervening gray period. Five spatial frequencies (0.02, 0.04, 0.08, 0.16, 0.32 cycles per degree), four phases (0, 0.25, 0.5, 0.75), and six orientations (separated by 30°) were used. Each grating condition was presented approximately 50 times in random order. The natural scenes stimulus consisted of 118 natural images taken from the Berkeley Segmentation Dataset, the van Hateren Natural Image Dataset, and the McGill Calibrated Colour Image Database. The images were presented in grayscale and were contrast normalized and resized to 1174 x 918 pixels. The images were presented in a random order for 0.25 s each, with no intervening gray period. Two natural movie clips were taken from the opening scene of the movie Touch of Evil. Natural Movie One was a 30 s clip repeated 20 or 30 times (2 or 3 blocks of 10), while Natural Movie Three was a 120 s clip repeated 10 times (2 blocks of 5). All clips were contrast normalized and were presented in grayscale at 30 fps.

Spikes-to-calcium forward model
All synthetic fluorescence traces were computed with MLSpike, using the third model described in that paper. This version models the supra-linear behavior of the calcium fluorescence response function in the most physiological manner (out of the three models compared) by (1) explicitly accounting for cooperative binding between calcium and the indicator via the Hill equation and (2) including an explicit rise time, τ_ON. The model had seven free parameters: decay time (τ), unitary response amplitude (A), noise level (σ), Hill exponent (n), ΔF/F rise time (τ_ON), saturation (γ), and baseline calcium concentration (c₀). The last four parameters were fit on a ground truth dataset comprising 14 Emx1-Ai93 (from nine individual neurons across two mice) and 17 Cux2-Ai93 recordings (from 11 individual neurons across two mice), each between 120 s and 310 s in duration, with simultaneous cell-attached electrophysiology and two-photon imaging (noise-matched to the imaging dataset): n = 2.42, τ_ON = 0.0034, γ = 0.0021, and c₀ = 0.46. Reasonable values for the first three parameters were established by applying the MLSpike autocalibration function to all neurons recorded in the imaging dataset, computing a histogram for each parameter, and choosing the value corresponding to the peak of the histogram, which yielded τ = 0.359, A = 0.021, and σ = 0.047. To convert spike times to synthetic fluorescence traces, MATLAB code publicly released by Deneux et al. (https://github.com/MLspike) was wrapped into a Python (v3.6.7) module via the MATLAB Library Compiler SDK, and run in parallel on a high-performance compute cluster.
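Choosing a single representative value for each calibrated parameter (the peak of its histogram across neurons) is a one-line operation; a sketch is included below for completeness, with an arbitrary bin count and an illustrative function name.

```python
import numpy as np

def histogram_mode(values, bins=100):
    """Return the center of the most populated histogram bin.

    Used here to pick one representative value (e.g. for the amplitude,
    decay time, or noise parameters) from per-neuron calibrated values.
    """
    counts, edges = np.histogram(values, bins=bins)
    peak = np.argmax(counts)
    return 0.5 * (edges[peak] + edges[peak + 1])

# e.g. amplitude = histogram_mode(per_neuron_amplitudes)
```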
ℓ0-regularized event extraction
Prior to computing response metrics, the normalized fluorescence traces for both the experimental and synthetic imaging data were passed through an ℓ0 event detection algorithm that identified the onset time and magnitude of transients, using a revised version of this algorithm available at github.com/jewellsean/FastLZeroSpikeInference. The half-time of the transient decay was assumed to be fixed at 315 ms. To avoid overfitting small-amplitude false-positive events to noise in the fluorescence trace, the ℓ0 regularization was adjusted for each neuron using an iterative algorithm, such that the smallest detected events were at least 200% of the respective noise floor (computed as the robust standard deviation of the noise via the noise_std Python function from the allensdk.brain_observatory.dff module). All subsequent analyses were performed on these events, rather than on the continuous fluorescence time series.

Non-negative deconvolution (NND)
For the comparisons shown in , we also extracted events via non-negative deconvolution (NND), using the Python implementation included in Suite2p. Prior to extracting events with NND, we upsampled the 30 Hz ΔF/F traces to 150 Hz using scipy.signal.resample_poly, because in our hands NND performed substantially better on upsampled data. In another benchmarking paper, data were also upsampled to 100 Hz ‘for ease of comparison’. To filter out low-amplitude events, we first scaled the NND event amplitudes by the maximum value of the original ΔF/F trace for that neuron, which allowed us to define an event magnitude threshold as a function of the noise level detected in the ΔF/F trace. We then binned events in 100 ms intervals, defining the bin’s magnitude as the sum of the magnitudes of all events that fell in that bin. We set the value of a bin to zero if its overall event magnitude was below a threshold value that was an integer multiple (0 ≤ n ≤ 10) of each neuron’s robust standard deviation of the noise (σ).

Visual response metrics
All response metrics were calculated from data stored in NWB 2.0 files using Python code in a custom branch of the AllenSDK (github.com/jsiegle/AllenSDK/tree/ophys-ephys), which relies heavily on the NumPy, SciPy (SciPy 1.0), Matplotlib, Pandas, xarray, and scikit-learn open-source libraries. For a given imaging stimulus presentation, the response magnitude for one neuron was defined as the summed amplitude of all of the events occurring between the beginning and end of the presentation. For a given ephys stimulus presentation, the response magnitude for one neuron was defined as the number of spikes occurring between the beginning and end of the presentation. Otherwise, the analysis code used for the two modalities was identical.

Responsiveness
To determine whether a neuron was responsive to a given stimulus type, the neuron’s response to its preferred condition was compared to a distribution of its activity during the nearest epoch of mean-luminance gray screen (the ‘spontaneous’ interval). This distribution was assembled by randomly selecting 1000 intervals with the same duration as each presentation for that stimulus type (drifting gratings = 2 s, static gratings = 0.25 s, natural scenes = 0.25 s, natural movies = 1/30 s). The preferred condition is the stimulus condition (e.g. a drifting grating with a particular direction and temporal frequency) that elicited the largest mean response.
The response reliability was defined as the percentage of preferred-condition trials with a response magnitude larger than that of 95% of the spontaneous intervals. A neuron was deemed responsive to a particular stimulus type if its response reliability was greater than 25%. Selectivity and preference metrics were only analyzed for responsive neurons.
Selectivity
The selectivity of a neuron's responses within a stimulus type was measured using a lifetime sparseness metric . Lifetime sparseness is defined as:

$$\frac{1-\frac{1}{n}\left(\sum_{i=1}^{n} r_i\right)^{2}\left(\sum_{i=1}^{n} r_i^{2}\right)^{-1}}{1-\frac{1}{n}}$$

where n is the total number of conditions, and r_i represents the response magnitude for condition i. If a neuron has a non-zero response to only one condition (maximally selective response), its lifetime sparseness will be 1. If a neuron responds equally to all conditions (no selectivity), its lifetime sparseness will be 0. Importantly, lifetime sparseness is a nonparametric statistic that considers a neuron's selectivity across all possible stimulus conditions within a stimulus type, rather than conditions that vary only one parameter (e.g. orientation selectivity). For that reason, it is applicable to any stimulus type.
Preference
For all stimulus types, the preferred condition was defined as the condition (or frame, in the case of natural movies) that elicited the largest mean response across all presentations. For drifting gratings, the preferred temporal frequency was defined as the temporal frequency that elicited the largest mean response (averaged across directions). For static gratings, the preferred spatial frequency was defined as the spatial frequency that elicited the largest mean response (averaged across orientations and phases).
Matching layer distributions
Neurons in the imaging dataset were assigned to layers based on the depth of the imaging plane (<200 µm = L2/3, 200–325 µm = L4, 325–500 µm = L5, >500 µm = L6), or the mouse Cre line (Nr5a1-Cre and Scnn1a-Tg3-Cre neurons were always considered to be L4). Neurons in the ephys dataset were assigned to layers after mapping their position to the Common Coordinate Framework version 3 . CCFv3 coordinates were used as indices into the template volume in order to extract layer labels for each cortical unit (see for details of the mapping procedure). To test for an effect of laminar sampling bias, L6 neurons were first removed from both datasets. Next, since the ephys dataset always had the highest fraction of neurons in L5, neurons from L2/3 and L4 of the imaging dataset were randomly sub-sampled to match the relative fraction of ephys neurons from those layers. The final resampled layer distributions are shown in .
Burst metrics
Bursts were detected using the LogISI method . Peaks in the histogram of the log-adjusted inter-spike intervals (ISI) were identified, and the largest peak corresponding to an ISI of less than 50 ms was set as the intra-burst peak. In the absence of such a peak, no bursts were found. Minima between the intra-burst peak and subsequent peaks were found, and a void parameter, representing peak separability, was calculated for each minimum. The ISI value for the first minimum where the void parameter exceeded a default threshold of 0.7 was used as the maxISI cutoff for burst detection. Bursts were then defined as a series of more than three spikes with ISIs less than maxISI. If no cutoff was found, or if maxISI > 50 ms, burst cores were found with <50 ms ISI, and any spikes within maxISI of burst edges were included.
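The lifetime sparseness expression above reduces to a few lines of NumPy; this is an illustrative re-implementation, not the released analysis code.

```python
import numpy as np

def lifetime_sparseness(condition_means):
    """Lifetime sparseness over all conditions of one stimulus type:
    1 for a response confined to a single condition, 0 for equal
    responses to every condition (assumes at least one non-zero mean)."""
    r = np.asarray(condition_means, dtype=float)
    n = r.size
    num = 1.0 - (r.sum() ** 2) / (n * (r ** 2).sum())
    return num / (1.0 - 1.0 / n)
```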
R code provided with a comparative review of bursting methods ( ; https://github.com/ellesec/burstanalysis ) was wrapped into Python (v3.6.7) using the rpy2 interface ( https://rpy2.github.io ), and run in parallel on a high-performance compute cluster.
Statistical comparisons
Jensen–Shannon distance was used to quantify the disparity between the distributions of metrics from imaging and ephys. This is the square root of the Jensen–Shannon divergence, also known as the total divergence to the mean, which, while derived from the Kullback–Leibler divergence, has the advantage of being symmetric and always finite. The Jensen–Shannon distance constitutes a true mathematical metric that satisfies the triangle inequality. We used the implementation from the SciPy library (SciPy 1.0; scipy.spatial.distance.jensenshannon). For selectivity and responsiveness metrics, Jensen–Shannon distance was calculated between histograms with 10 equal-sized bins between 0 and 1. For preference metrics, Jensen–Shannon distance was calculated between the preferred condition histograms, with unit spacing between the conditions. To compute p values for Jensen–Shannon distances, we used a bootstrap procedure that randomly sub-sampled metric values from one modality and calculated the distance between these intra-modal distributions. We repeated this procedure 1000 times in order to estimate the probability that the true inter-modality distance would be less than the distance between the distributions of two non-overlapping intra-modality samples. The Pearson correlation coefficient (scipy.stats.pearsonr) was used to quantify the correlation between two variables. The Mann–Whitney U test (scipy.stats.ranksums) was used to test for differences in running speed or running fraction between the imaging and ephys datasets.
Clustering of response reliabilities
We performed a clustering analysis using the response reliabilities by stimulus for each neuron (defined as the percentage of significant trials for the neuron's preferred stimulus condition), across drifting gratings, static gratings, natural scenes, and natural movies. We combined the reliabilities for natural movies by taking the maximum reliability over Natural Movie One and Natural Movie Three. This resulted in a set of four reliabilities for each neuron (for drifting gratings, static gratings, natural movies, and natural scenes). We performed a Gaussian Mixture Model clustering on these reliabilities for cluster numbers from 1 to 50, using the average Bayesian Information Criterion on held-out data with four-fold cross validation to select the optimal number of clusters. Once the optimal model was selected, we labeled each cluster according to its profile of responsiveness (i.e. the average reliability across all neurons in the cluster to drifting gratings, static gratings, etc.), defining these profiles as 'classes'. For each neuron, we predicted its cluster membership using the optimal model, and then its class membership using a predefined responsiveness threshold. We repeated this process 100 times to estimate the robustness of the clustering and derive uncertainties for the number of cells belonging to each class.
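The cluster-number selection can be sketched with scikit-learn. The reliability matrix (neurons × 4 stimulus classes) is assumed to be precomputed, and the exact scoring details of the original analysis may differ from this minimal version.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import KFold

def select_n_clusters(reliabilities, max_k=50, n_splits=4, seed=0):
    """Choose the GMM component count that minimizes the average BIC on
    held-out folds. `reliabilities` is an (n_neurons, 4) array of response
    reliabilities (drifting gratings, static gratings, natural movies,
    natural scenes)."""
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    mean_bic = []
    for k in range(1, max_k + 1):
        fold_bic = []
        for train_idx, test_idx in kf.split(reliabilities):
            gmm = GaussianMixture(n_components=k, random_state=seed)
            gmm.fit(reliabilities[train_idx])
            fold_bic.append(gmm.bic(reliabilities[test_idx]))
        mean_bic.append(np.mean(fold_bic))
    return int(np.argmin(mean_bic)) + 1
```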
We used two-photon calcium imaging recordings from the Allen Brain Observatory Visual Coding dataset ( ; 2016 Allen Institute for Brain Science, available from observatory.brain-map.org ). This dataset consists of calcium fluorescence time series from 63,521 neurons in six different cortical areas across 14 different transgenic lines. Neurons were imaged for three separate sessions (A, B, and C), each of which used a different visual stimulus set . Our analysis was limited to neurons in five areas (V1, LM, AL, PM, and AM) and 10 lines expressing GCaMP6f in excitatory neurons, which were present in either session A, session B, or both (a total of 41,578 neurons). We used extracellular electrophysiological recordings from the Allen Brain Observatory Neuropixels dataset ( ; 2019 Allen Institute for Brain Science, available from portal.brain-map.org/explore/circuits/visual-coding-neuropixels ). This dataset consists of spike trains from 99,180 'units' (putative neurons with varying degrees of completeness and contamination) from 58 mice in a variety of cortical and subcortical structures. We limited our analysis to 31 sessions that used the 'Brain Observatory 1.1' stimulus set and units (hereafter, 'neurons') from five visual cortical areas (V1, LM, AL, PM, and AM) that displayed 'regular spiking' action potential waveforms (peak-to-trough interval > 0.4 ms). Only neurons that passed the following quality control thresholds were included: presence ratio > 0.9 (fraction of the recording session during which spikes are detected), amplitude cutoff < 0.1 (estimate of the fraction of missed spikes), and ISI violations score < 0.5 (estimate of the relative rate of contaminating spikes). After these filtering steps, there were 5917 neurons for analysis.
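Applied to a units table like the one distributed with the public Neuropixels dataset, this quality-control filter reduces to a boolean mask along the following lines. The column names are assumptions based on the released metadata rather than a verified snippet.

```python
# `units` is assumed to be a pandas DataFrame of sorted units with QC metrics
# and CCF structure labels (e.g. as returned by the AllenSDK project cache).
visual_areas = ["VISp", "VISl", "VISal", "VISpm", "VISam"]  # V1, LM, AL, PM, AM

mask = (
    (units["presence_ratio"] > 0.9)
    & (units["amplitude_cutoff"] < 0.1)
    & (units["isi_violations"] < 0.5)
    & (units["waveform_duration"] > 0.4)  # peak-to-trough (ms), 'regular spiking'
    & units["ecephys_structure_acronym"].isin(visual_areas)
)
filtered_units = units[mask]
```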
We collected a novel electrophysiology dataset from transgenic mice expressing GCaMP6f, as well as additional wild-type mice. Experiments were conducted in accordance with PHS Policy on Humane Care and Use of Laboratory Animals and approved by the Allen Institute’s Institutional Animal Care and Use Committee under protocols 1409 (‘A scalable data generation pipeline for creation of a mouse Cortical Activity Map’), 1706 (‘Brain Observatory: Optical Physiology’), and 1805 (‘Protocol for in vivo electrophysiology of mouse brain’). The procedures closely followed those described in and are summarized below. Mice were maintained in the Allen Institute animal facility and used in accordance with protocols approved by the Allen Institute’s Institutional Animal Care and Use Committee. Five genotypes were used: wild-type C57BL/6J mice purchased from Jackson Laboratories ( n = 2) or Vip-IRES-Cre;Ai148 ( n = 3), Sst-IRES-Cre;Ai148 ( n = 6), Slc17a7-IRES2-Cre;Camk2a-tTA;Ai93 ( n = 3), and Cux2-CreERT2;Camk2a-tTA;Ai93 ( n = 3) mice bred in-house. Following surgery, mice were single-housed and maintained on a reverse 12 hr light cycle. All experiments were performed during the dark cycle. At around age P80, mice were implanted with a titanium headframe. In the same procedure, a 5 mm diameter piece of skull was removed over visual cortex, followed by a durotomy. The skull was replaced with a circular glass coverslip coated with a layer of silicone to reduce adhesion to the brain surface. On the day of recording (at least four weeks after the initial surgery), the glass coverslip was removed and replaced with a plastic insertion window containing holes aligned to six cortical visual areas, identified via intrinsic signal imaging . An agarose mixture was injected underneath the window and allowed to solidify. This mixture was optimized to be firm enough to stabilize the brain with minimal probe drift, but pliable enough to allow the probes to pass through without bending. At the end of this procedure, mice were returned to their home cages for 1–2 hr prior to the recording session. All recordings were carried out in head-fixed mice using Neuropixels 1.0 probes ( ; available from neuropixels.org ) mounted on 3-axis stages from New Scale Technologies (Victor, NY). These probes have 383 recording sites oriented in a checkerboard pattern on a 70 μm wide x 10 mm long shank, with 20 µm vertical spacing. Data streams from each electrode were acquired at 30 kHz (spike band) and 2.5 kHz (LFP band) using the Open Ephys GUI . Gain settings of 500x and 250x were used for the spike band and LFP band, respectively. Recordings were referenced to a large, low-impedance electrode at the tip of each probe. Pre-processing, spike sorting, and quality control methods were identical to those used for the previously released dataset (code available at https://github.com/alleninstitute/ecephys_spike_sorting (copy archived at ; swh:1:rev:995842e4ec67e9db1b7869d885b97317012337db ) and https://github.com/MouseLand/Kilosort (copy archived at ; swh:1:rev:db3a3353d9a374ea2f71674bbe443be21986c82c )). Filtering by brain region (V1, LM, AL, PM, and AM), waveform width (>0.4 ms), and QC metrics (presence ratio > 0.9, amplitude cutoff < 0.1, ISI violations score < 0.5) yielded 5113 neurons for analysis. For all analyses except for those in , neurons from this novel dataset were grouped with those from the previously released dataset, for a total of 11,030 neurons. 
Neurons were registered to 3D brain volumes obtained with an open-source optical projection tomography system ( https://github.com/alleninstitute/AIBSOPT , copy archived at ; swh:1:rev:e38af7e25651fe7517dcf7ca3d38676e3c9c211e ). Brains were first cleared using a variant of the iDISCO method , then imaged with white light (for internal structure) or green light (to visualize probe tracks labeled with fluorescent dye). Reconstructed volumes were mapped to the Mouse Common Coordinate Framework (CCFv3) by matching key points in the original brain to corresponding points in a template volume. Finally, probe tracks were manually traced and warped into the CCFv3 space, and electrodes were aligned to structural boundaries based on physiological landmarks .
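The layer assignment for ephys units amounts to indexing the CCFv3 annotation volume at each unit's registered coordinate. A simplified sketch, assuming a 25 µm isotropic annotation volume and coordinates already expressed in micrometers (the actual pipeline involves additional alignment steps):

```python
import numpy as np

def ccf_structure_id(coord_um, annotation, resolution_um=25):
    """Return the structure ID at a CCFv3 coordinate (in micrometers) by
    indexing into the 3D annotation volume."""
    i, j, k = (np.asarray(coord_um, dtype=float) / resolution_um).astype(int)
    return int(annotation[i, j, k])
```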
Invasive versus conservative strategies for non-ST-elevation acute coronary syndrome in the elderly: an updated systematic review and meta-analysis of randomized controlled trials | 5e765755-33db-42eb-9aed-34e544844d66 | 11823017 | Surgical Procedures, Operative[mh] | The initial management of non-ST-segment elevation acute coronary syndrome (NSTE-ACS) traditionally follows one of two pathways: a routine invasive strategy involving inpatient coronary angiography with potential revascularization, or a conservative strategy utilizing optimal medical therapy with selective angiography based on clinical indicators . While the routine invasive approach has demonstrated a reduction in composite ischemic events in the general population, its benefits must be weighed against increased risks of periprocedural complications and bleeding, particularly as it has not shown a clear mortality benefit in meta-analyses . This risk–benefit balance becomes particularly crucial in older adults, who represent an increasing proportion of NSTE-ACS presentations and face unique challenges. These patients typically present with more complex coronary anatomy, greater comorbidity burden, and higher baseline risks for both adverse cardiovascular outcomes and procedural complications . Despite these distinct characteristics, current guidelines largely extrapolate recommendations from younger populations, as elderly patients have been historically underrepresented in or excluded from major cardiovascular trials . Earlier meta-analyses of studies focusing specifically on elderly patients predominantly suggest more favorable outcomes with an invasive strategy regarding reducing recurrent myocardial infarction (MI) and the need for urgent revascularization. However, the findings of these studies on mortality and bleeding events are inconsistent and inconclusive . A recent individual patient data meta-analysis of 6 RCTs (1,479 patients) found lower rates of recurrent MI and urgent revascularization within the first year with an invasive strategy, though the composite of all-cause mortality and MI showed no difference between approaches . The evidence base has recently expanded with new data, including a large open-label RCT enrolling 1,518 patients and extended follow-up data from previously published trials . This expanding evidence landscape, coupled with persistent uncertainties, demands a fresh evaluation of management strategies for elderly NSTE-ACS patients. Our meta-analysis synthesizes this comprehensive dataset to provide contemporary guidance for this high-risk population, where optimal treatment selection remains a critical clinical challenge. This systematic review and meta-analysis followed a prospectively registered protocol (PROSPERO: CRD42024609066) detailing our methodology, eligibility criteria, and outcomes of interest. We conducted and reported our analysis according to the Cochrane Handbook and the Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) 2020 guidelines, respectively . Search strategy We conducted a comprehensive literature search across four databases—PubMed, Embase, Web of Science, and Scopus—to identify RCTs or subanalysis of RCTs published up to October 1st, 2024, that evaluated initial management approaches in elderly (≥ 70 years old) patients with NSTE-ACS. Our search strategy combined MeSH terms and free-text keywords relevant to the research question, including terms related to invasive and conservative strategies, outcomes, and older populations. 
The detailed search syntax used for each database is provided in the Supplementary Materials. Additionally, we manually searched the reference lists of eligible articles and prior systematic reviews (i.e., backward citation tracking) and recent publications that have cited the included studies (i.e., forward citation tracking) to ensure no eligible study was missed.
Study selection and eligibility criteria
Two reviewers (E.K. and A.A.) independently screened the retrieved records with their titles and abstracts against the eligibility criteria. The full texts of potentially eligible records were then scrutinized by two investigators in duplicate. At each stage, any disagreements between the reviewers were first resolved through discussion and then, if consensus could not be reached, by the adjudication of a third reviewer (A.H.). Only peer-reviewed, published RCTs or subanalyses of RCTs that investigated the comparative efficacy and safety of invasive versus conservative strategies in elderly patients with NSTE-ACS were included. Reviews, editorials, case reports, case series, conference papers, pre-proofs, pre-prints, and observational studies were excluded from the analysis. The co-primary outcomes of interest were all-cause mortality and cardiovascular death. The secondary efficacy and safety outcomes included MI, stroke, revascularization, decompensated heart failure, and bleeding events.
Data extraction
A standardized data extraction form was created to collect relevant details from each included study systematically. The two reviewers (A.G.J. and F.Y.) independently extracted data, including RCT name, first author name, publication year, study population characteristics (country, gender, comorbidities, and medical profile), and the incidence of all-cause mortality, cardiovascular/cardiac death, MI, revascularization, decompensated heart failure, and bleeding events in each study arm. Any discrepancies in extracted data were discussed to reach a consensus.
Risk of bias assessment
A.G.J. and E.H. evaluated the methodological quality of the included studies using the Cochrane Risk of Bias 2 (RoB 2) tool for randomized trials . This tool assesses bias across five domains: (1) bias arising from the randomization process, (2) bias due to deviations from intended interventions, (3) bias due to missing outcome data, (4) bias in the measurement of the outcome, and (5) bias in the selection of the reported result. Each domain was judged as "low risk of bias," "some concerns," or "high risk of bias," and an overall risk of bias judgment was assigned based on these domain-level assessments. Inconsistencies were addressed with the assistance of a third reviewer (A.H.). Publication bias was not assessed, as the number of included studies in each analysis did not exceed 10, rendering such tests unreliable .
Statistical analysis
Our analysis employed two complementary statistical approaches. First, a random-effects model with the DerSimonian-Laird method was used to calculate risk ratios (RR) with corresponding 95% confidence intervals (CI) for the outcomes. For analyses including at least 5 studies, 95% prediction intervals (PI) were also calculated to estimate the expected range of true effects in future studies. For this approach, sensitivity analyses were performed using the "leave-one-out" method to assess whether omitting any single included study changed the results significantly.
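For illustration, the DerSimonian–Laird random-effects pooling of risk ratios described above can be written directly; this is a generic Python sketch with per-study 2×2 counts as inputs, not the authors' R-based analysis, and it omits refinements such as zero-cell corrections.

```python
import numpy as np

def dersimonian_laird_rr(events_inv, n_inv, events_cons, n_cons):
    """Pool per-study risk ratios (invasive vs conservative) with the
    DerSimonian-Laird random-effects model; returns RR and its 95% CI."""
    a, na = np.asarray(events_inv, float), np.asarray(n_inv, float)
    b, nb = np.asarray(events_cons, float), np.asarray(n_cons, float)
    y = np.log((a / na) / (b / nb))          # per-study log risk ratios
    v = 1 / a - 1 / na + 1 / b - 1 / nb      # approximate variances
    w = 1 / v                                # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)       # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-study variance
    w_re = 1 / (v + tau2)                    # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return np.exp(mu), np.exp(mu - 1.96 * se), np.exp(mu + 1.96 * se)

# A leave-one-out sensitivity analysis simply re-pools after dropping each
# study in turn, e.g. dersimonian_laird_rr(np.delete(a, i), np.delete(na, i), ...).
```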
Also, a separate sensitivity analysis for bleeding outcomes was performed, including studies using TIMI bleeding definitions to address heterogeneity in bleeding outcomes. Additionally, we conducted a subgroup analysis of all outcomes for octogenarians (≥ 80 years) and meta-regression analyses to explore the relationship between mean age and treatment effects. Second, we conducted time-to-event analyses using hazard ratios (HR) by combining data from two individual patient data meta-analyses by Kotanidis et al. and Damman et al. with a newly published large RCT (SENIOR-RITA by Kunadian et al.) . The results of the studies were combined using the generic inverse variance method. Effect estimates were considered statistically significant when p -value < 0.05, indicated by their respective 95% CI not encompassing the null value. Heterogeneity was quantified using I 2 statistics, with I 2 > 50% considered to represent significant heterogeneity. Tests for assessing the publication bias were not conducted since less than 10 studies were included for analysis. All the analyses reported in this meta-analysis were undertaken in R Software version 4.3.2 using “meta” and “metafor” packages. We conducted a comprehensive literature search across four databases—PubMed, Embase, Web of Science, and Scopus—to identify RCTs or subanalysis of RCTs published up to October 1st, 2024, that evaluated initial management approaches in elderly (≥ 70 years old) patients with NSTE-ACS. Our search strategy combined MeSH terms and free-text keywords relevant to the research question, including terms related to invasive and conservative strategies, outcomes, and older populations. The detailed search syntax used for each database is provided in the Supplementary Materials. Additionally, we manually searched the reference list of eligible articles and prior systematic reviews (i.e., backward citation tracking) and recent publications that have cited to the included studies (i.e., forward citation tracking) to ensure no eligible study has been missed. Two reviewers (E.K. and A.A.) independently screened the retrieved records with their titles and abstracts against the eligibility criteria. The full texts of potentially eligible records then were scrutinized by two investigators in duplicate. At each stage, any disagreements between the reviewers were firstly resolved through discussion and then by the adjudication of a third reviewer (A.H.) if consensus could not be reached. Only peer-reviewed, publicated RCTs or subanalyses of RCTs that investigated the comparative efficacy and safety of invasive versus conservative strategies in elderly patients with NSTE-ACS were included. Reviews, editorials, case reports, case series, conference papers, pre-proofs, pre-prints, and observational studies were excluded from the analysis. The co-primary outcomes of interest were all-cause mortality and cardiovascular death. The secondary efficacy and safety outcomes included MI, stroke, revascularization, decompensated heart failure, and bleeding events. A standardized data extraction form was created to collect relevant details from each included study systematically. The two reviewers (A.G.J. and F.Y.) independently extracted data, including RCT name, first author name, publication year, study population characteristics (country, gender, comorbidities, and medical profile), incidence of all-cause mortality, cardiovascular/cardiac death, MI, revascularization, decompensated heart failure, and bleeding events in each study arm. 
Any discrepancies in extracted data were discussed to reach a consensus. A.G.J. and E.H. evaluated the methodological quality of the research using the Cochrane Risk of Bias 2 (RoB 2) tool for randomized trials . This tool assesses bias across five domains: (1) bias arising from the randomization process, (2) bias due to deviations from intended interventions, (3) bias due to missing outcome data, (4) bias in the measurement of the outcome, and (5) bias in the selection of the reported result. Each domain was judged as "low risk of bias," "some concerns," or "high risk of bias," and an overall risk of bias judgment was assigned based on these domain-level assessments. Inconsistencies were addressed with the assistance of a third reviewer (A.H.). Publication bias was not assessed, as the number of included studies in each analysis did not exceed 10, rendering the results unreliable . Our analysis employed two complementary statistical approaches. First, a random-effects model with the DerSimonian-Laird method was used to calculate risk ratios (RR) with corresponding 95% confidence intervals (CI) for the outcomes. For analyses including at least 5 studies, the 95% prediction intervals (PI) were also calculated to estimate the expected range of true effects in future studies. For this approach, sensitivity analyses were performed using the "leave-one-out" method to assess if omitting any of the included studies could change the results significantly. Also, a separate sensitivity analysis for bleeding outcomes was performed, including studies using TIMI bleeding definitions to address heterogeneity in bleeding outcomes. Additionally, we conducted a subgroup analysis of all outcomes for octogenarians (≥ 80 years) and meta-regression analyses to explore the relationship between mean age and treatment effects. Second, we conducted time-to-event analyses using hazard ratios (HR) by combining data from two individual patient data meta-analyses by Kotanidis et al. and Damman et al. with a newly published large RCT (SENIOR-RITA by Kunadian et al.) . The results of the studies were combined using the generic inverse variance method. Effect estimates were considered statistically significant when p -value < 0.05, indicated by their respective 95% CI not encompassing the null value. Heterogeneity was quantified using I 2 statistics, with I 2 > 50% considered to represent significant heterogeneity. Tests for assessing the publication bias were not conducted since less than 10 studies were included for analysis. All the analyses reported in this meta-analysis were undertaken in R Software version 4.3.2 using “meta” and “metafor” packages. A PRISMA flow diagram outlines the study selection process and results (Fig. ). Our comprehensive database search identified 2941 records screened for duplicates, leaving 2224 studies for title/abstract review. We excluded 2158 papers at this stage as it was clear from the title and abstract that the topic or outcomes were irrelevant to this review or methodologically did not fit the eligibility criteria. The full texts of the remaining 66 articles were assessed for eligibility based on the predefined criteria. The details for excluded studies after reviewing full-texts are available in Table S1. Following a full-text review, 14 publications derived from 11 randomized controlled trials met the inclusion criteria for quantitative synthesis. 
These publications comprised: five independent trials specifically designed for elderly patients (represented by five publications) , two dedicated elderly trials with both primary results and extended follow-up analyses (4 publications) , one secondary analysis of elderly subgroup data from a general population trial (1 publication) , and one patient-level pooled analysis of elderly participants from three independent RCTs (FRISC II , RITA 3 , and ICTUS ) known collectively as FIR trials (1 publication) . Study characteristics Study characteristics and patient population Our systematic review identified 11 randomized controlled trials published between 2000 and 2024, enrolling a total of 4114 elderly patients with NSTE-ACS. The sample sizes varied considerably, from 106 patients in the MOSCA trial to 1,518 patients in the SENIOR-RITA trial . These trials were conducted across multiple European and North American countries. One noticeable variation among these RCTs is the age threshold defining “elderly,” which ranged from ≥ 70 to ≥ 80 years. Three trials—After Eighty , the 80 + study , and RINCAL —specifically focused on octogenarians, while others employed lower age thresholds. Nevertheless, the approximate mean age of the total included population in this analysis is over 80 and provides a representative sample of elderly patients, enhancing the generalizability of our findings. Cardiovascular risk profiles and comorbidities As shown in Table , cardiovascular risk profiles and comorbidity patterns varied widely across studies. Hypertension prevalence ranged from 59% in the After Eighty study to 92% in the MOSCA-FRAIL trial. Diabetes mellitus prevalence showed similar variation, from 15% in FIR trials to 56% in MOSCA-FRAIL. Prior MI was common across studies (27–44%), with the highest rates in MOSCA and lowest in the RINCAL. Renal dysfunction prevalence ranged markedly, from 21% in SENIOR-RITA to 69% in the 80 + study. Atrial fibrillation prevalence showed moderate variability (13–27%), highest in MOSCA-FRAIL and lowest in the Italian Elderly ACS study. Previous revascularization rates also differed, with prior PCI ranging from 4 to 31% and CABG from 3 to 18%. These differences in comorbidity profiles likely reflect variations in inclusion criteria and recruitment strategies across trials. While earlier trials, like TACTICS–TIMI 18 and FIR trials, employed broader inclusion criteria, more recent trials incorporated specific geriatric assessments . The MOSCA trial uniquely focused on patients with multiple comorbidities, requiring at least two major comorbidities for inclusion . Notably, the MOSCA-FRAIL and SENIOR-RITA trials systematically assessed frailty, with SENIOR-RITA also evaluating cognitive function . Procedural characteristics and management strategies Recent trials showed notable procedural advancements, particularly with increased radial access rates (> 80% in SENIOR-RITA and After Eighty), which may have influenced bleeding complications . As shown in Tables and , the variability in the timing and approach to invasive management was also observed. The allowed delay in the timing of angiography in invasive arms ranged from a maximum of 48 h in the TACTICS–TIMI 18 trial up to 7 days in SENIOR-RITA and FRISC II , with most trials mandating 72 h limit. Revascularization rates in these arms spanned 50% to 62% of randomized patients. 
Conservative arms showed distinct differences in cross-over criteria for angiography, and all trials allowed for refractory symptoms or clinical deterioration. However, thresholds varied, leading to coronary angiography rates from 0% in After Eighty to 49% in the TACTICS–TIMI 18 trial, with subsequent revascularization rates ranging from 0 to 32% . These differences likely stemmed from varying definitions of conservative and invasive strategies, criteria for medical therapy failure, and thresholds for rescue angiography. As outlined in Table , follow-up durations also varied, ranging from a minimum of 6 months to a median of 5.3 years . Unfortunately, both the 80 + study and RINCAL were terminated prematurely due to recruitment challenges. Clinical endpoint definitions and assessment The definition of MI evolved over time, with earlier trials using older universal definitions of MI, while more recent trials like SENIOR-RITA employed the Fourth Universal Definition . The bleeding outcome definition had some levels of heterogeneity across the studies, as the classification of bleeding outcomes was according to the Bleeding Academic Research Consortium (BARC) definition in 3 trials (SENIOR-RITA, RINCAL, and Italian Elderly ACS) and according to Thrombolysis in Myocardial Infarction (TIMI) criteria in 4 trials (80 + , After Eighty, MOSCA, and TACTICS–TIMI 18) while one study (MOSCA-FRAIL) used a separate definition (Table S2). Bleeding outcomes were harmonized across trials using established criteria from the BARC and TIMI classifications (Table S3) . Major bleeding was defined as BARC type 3b or higher and its TIMI equivalent, encompassing fatal bleeding, symptomatic intracranial hemorrhage, hemodynamic compromise requiring intervention, and bleeding requiring transfusion of ≥ 5 units of whole blood/red cells. Minor bleeding was defined as BARC type 2-3a or its TIMI equivalent, characterized by overt bleeding requiring medical intervention or antithrombotic therapy modification without meeting major bleeding criteria. The data for major and minor bleeding were available separately in 5 trials (SENIOR-RITA, RINCAL, 80 + , After Eighty, and TACTICS–TIMI 18) while among the three remaining trials, the bleeding outcomes had been reported as a composite of major and minor bleeding in two trials (MOSCA-FRAIL and MOSCA), and in one study (Italian Elderly ACS) the bleeding outcome had been considered as a composite of BARC type 2, 3a, and 3b bleeding. Despite different classification systems, the fundamental criteria defining major bleeding events remained consistent between BARC and TIMI scales, enabling reliable cross-trial comparisons . Risk of bias assessment As summarized in Table , all studies were categorized as low-risk in terms of overall bias. While some concerns were noted regarding deviations from the intended intervention due to the open-label design and crossover rates, these did not significantly impact the overall assessments. Invasive vs. conservative management outcomes Analysis of the primary outcomes revealed comparable mortality rates between treatment strategies. Both all-cause mortality (RR: 1.04, 95% CI: 0.98–1.11, 95% PI: 0.97–1.12, p = 0.18) and cardiovascular mortality (RR: 0.98, 95% CI: 0.85–1.12, 95% PI: 0.82–1.16, p = 0.68) showed no significant differences between approaches, with completely homogeneous findings across studies (I2 = 0%, Tau2 = 0 for both outcomes) (Fig. A and B). 
Sensitivity analyses demonstrated remarkable stability in these findings, with all-cause mortality RRs ranging from 0.96–1.05 (all p -values > 0.05) and cardiovascular mortality RRs ranging from 0.92–1.02 (all p -values > 0.05) across all leave-one-out iterations (Fig. A and B). The narrow nonsignificant 95% PIs also suggest consistency across studies, as most future studies are also likely to show no clear survival benefit or harm from either strategy. The invasive strategy significantly reduced the need for subsequent revascularization procedures (RR: 0.41, 95% CI: 0.27–0.62, 95% PI: 0.19–0.90, p < 0.01; I2 = 30%, Tau2 = 0.0621) and the risk of MI (RR: 0.75, 95% CI: 0.57–0.99, 95% PI: 0.46–1.24, p = 0.04; I2 = 43%, Tau2 = 0.1768) (Fig. F and C). Sensitivity analyses confirmed the robustness of the revascularization benefit, with consistent RRs (0.37–0.49) maintaining statistical significance across all iterations ( p -values < 0.01) and moderate heterogeneity (I2: 0–42%) (Fig. F). The 95% PI confirms this potential benefit in future studies. The MI risk reduction showed more variability in sensitivity analyses (RRs: 0.72–0.79; I2: 24–51%), with statistical significance being lost in some analyses when certain studies were omitted ( p -values: 0.01–0.13), suggesting less stable but still potentially meaningful benefit (Fig. C). Furthermore, the wide 95% PI crossing null value for MI suggests that the observed risk reduction might not be consistent across all future populations or trials. Analysis of stroke outcomes showed no significant difference between strategies (RR: 0.99, 95% CI: 0.77–1.26, 95% PI: 0.64–1.53, p = 0.89) with excellent homogeneity (I2 = 0%, Tau2 = 0) (Fig. D). Sensitivity analyses maintained this finding (RRs: 0.88–1.16, all p > 0.05) with consistent absence of heterogeneity (Fig. D). The 95% PI reinforces this finding, suggesting that future studies will likely produce mixed findings. For decompensated heart failure, the invasive strategy showed a non-significant trend toward increased risk (RR: 1.26, 95% CI: 0.86–1.84, p = 0.16) with moderate heterogeneity (I2 = 25%, Tau2 = 0.1274) (Fig. E). This pattern persisted in sensitivity analyses (RRs: 1.13–1.45, all p > 0.05), while heterogeneity varied (I2: 0–49%) with study omissions (Fig. E). A subgroup analysis of octogenarians ( n = 893) from three trials (After Eighty, 80 + , RINCAL) showed similar patterns and point estimates to the overall population, though with wider confidence intervals and loss of statistical significance for several outcomes. In this subgroup, the invasive strategy showed no significant difference in all-cause mortality (RR: 1.05, 95% CI: 0.94–1.17) or cardiovascular death (RR: 0.98, 95% CI: 0.65–1.47) (Figure S1 A-B). Although MI risk showed a similar trend toward reduction with the invasive strategy (RR: 0.73, 95% CI: 0.26–2.02), the loss of statistical significance compared to the overall analysis suggests particular caution in interpreting this benefit in the very old adults (Figure S1C). The reduction in revascularization needs remained significant even in this older subgroup (RR: 0.43, 95% CI: 0.23–0.81, p = 0.03) (Figure S1E). In contrast to the neutral effect in the overall population, stroke risk trended higher with the invasive strategy in octogenarians (RR: 1.20, 95% CI: 0.85–1.90), though this difference did not reach statistical significance (Figure S1D). 
Meta-regression analyses exploring the relationship between mean age and treatment effects showed no statistically significant age-dependent trends for any of the clinical outcomes. Notably, stroke risk demonstrated a positive clinically relevant trend with advancing age (β = 0.1505, 95% CI: -0.1068 to 0.4079, p = 0.2517). The detailed results of meta-regression analyses are presented in Table S4 and visualized in Figure S2. As demonstrated in Fig. , safety analyses revealed significant increases in bleeding risk with the invasive strategy. The composite of major and minor bleeding was increased by 50% (RR: 1.50, 95% CI: 1.02–2.20, 95% PI: 0.77–2.91, p = 0.04) with moderate heterogeneity (I2 = 30%, Tau2 = 0.1894) (Fig. A), while major bleeding alone was nearly doubled (RR: 1.92, 95% CI: 1.04–3.56, p = 0.04) with no heterogeneity (I2 = 0%) (Fig. C). Sensitivity analyses demonstrated consistent effect directions with all point estimates above 1.0, though statistical significance varied. For the composite endpoint of major and minor bleeding, RRs ranged from 1.36 to 1.59 across leave-one-out iterations ( p -values: 0.02–0.17), with stable heterogeneity (I2: 17–33%) (Fig. B). The isolated major bleeding outcome showed similar stability, with RRs ranging from 1.54 to 2.13 ( p -values: 0.04–0.17) and persistent absence of heterogeneity (I2 = 0% throughout) (Fig. D). The 95% PI for the composite of major and minor bleeding suggests potential variability, as it spans a wide range and includes the null value, indicating the increase in bleeding risk associated with an invasive strategy may not be consistent across all clinical contexts. To address the heterogeneity in bleeding definitions, we performed a sensitivity analysis focusing specifically on studies using TIMI bleeding criteria (Figure S3). For the composite of major and minor bleeding, the pooled analysis of four studies using TIMI criteria showed a numerically increased but non-significant risk with the invasive strategy (RR: 1.47, 95% CI: 0.81–2.64) compared to the significant increase seen in the main analysis. Similarly, the analysis of major bleeding in this subgroup showed a nonsignificant trend toward increased risk (RR: 1.92, 95% CI: 0.01–470.93), though with substantial uncertainty in the estimate. Time-to-event analysis of pooled HRs demonstrated no significant differences in the composite endpoint of all-cause mortality and MI (HR: 0.95, 95% CI: 0.83–1.09, p = 0.48; I2 = 0%), all-cause mortality (HR: 1.10, 95% CI: 0.94–1.29, p = 0.22; I2 = 0%), cardiovascular mortality (HR: 0.94, 95% CI: 0.73–1.20, p = 0.60; I2 = 36%), or stroke (HR: 1.02, 95% CI: 0.58–1.79, p = 0.94; I2 = 48%) (Fig. F, A, B, and D). However, the invasive strategy significantly reduced the hazard of MI (HR: 0.64, 95% CI: 0.49–0.83, p < 0.01; I2 = 52%) and subsequent revascularization (HR: 0.30, 95% CI: 0.19–0.47, p < 0.01; I2 = 25%) (Figs. 5C and E). All studies showed consistent directions of effect for these significant outcomes, with SENIOR-RITA trial contributing the majority of the statistical weight (39.3% for MI and 70.5% for revascularization). Study characteristics and patient population Our systematic review identified 11 randomized controlled trials published between 2000 and 2024, enrolling a total of 4114 elderly patients with NSTE-ACS. The sample sizes varied considerably, from 106 patients in the MOSCA trial to 1,518 patients in the SENIOR-RITA trial . These trials were conducted across multiple European and North American countries. 
One noticeable variation among these RCTs is the age threshold defining “elderly,” which ranged from ≥ 70 to ≥ 80 years. Three trials—After Eighty , the 80 + study , and RINCAL —specifically focused on octogenarians, while others employed lower age thresholds. Nevertheless, the approximate mean age of the total included population in this analysis is over 80 and provides a representative sample of elderly patients, enhancing the generalizability of our findings. Cardiovascular risk profiles and comorbidities As shown in Table , cardiovascular risk profiles and comorbidity patterns varied widely across studies. Hypertension prevalence ranged from 59% in the After Eighty study to 92% in the MOSCA-FRAIL trial. Diabetes mellitus prevalence showed similar variation, from 15% in FIR trials to 56% in MOSCA-FRAIL. Prior MI was common across studies (27–44%), with the highest rates in MOSCA and lowest in the RINCAL. Renal dysfunction prevalence ranged markedly, from 21% in SENIOR-RITA to 69% in the 80 + study. Atrial fibrillation prevalence showed moderate variability (13–27%), highest in MOSCA-FRAIL and lowest in the Italian Elderly ACS study. Previous revascularization rates also differed, with prior PCI ranging from 4 to 31% and CABG from 3 to 18%. These differences in comorbidity profiles likely reflect variations in inclusion criteria and recruitment strategies across trials. While earlier trials, like TACTICS–TIMI 18 and FIR trials, employed broader inclusion criteria, more recent trials incorporated specific geriatric assessments . The MOSCA trial uniquely focused on patients with multiple comorbidities, requiring at least two major comorbidities for inclusion . Notably, the MOSCA-FRAIL and SENIOR-RITA trials systematically assessed frailty, with SENIOR-RITA also evaluating cognitive function . Procedural characteristics and management strategies Recent trials showed notable procedural advancements, particularly with increased radial access rates (> 80% in SENIOR-RITA and After Eighty), which may have influenced bleeding complications . As shown in Tables and , the variability in the timing and approach to invasive management was also observed. The allowed delay in the timing of angiography in invasive arms ranged from a maximum of 48 h in the TACTICS–TIMI 18 trial up to 7 days in SENIOR-RITA and FRISC II , with most trials mandating 72 h limit. Revascularization rates in these arms spanned 50% to 62% of randomized patients. Conservative arms showed distinct differences in cross-over criteria for angiography, and all trials allowed for refractory symptoms or clinical deterioration. However, thresholds varied, leading to coronary angiography rates from 0% in After Eighty to 49% in the TACTICS–TIMI 18 trial, with subsequent revascularization rates ranging from 0 to 32% . These differences likely stemmed from varying definitions of conservative and invasive strategies, criteria for medical therapy failure, and thresholds for rescue angiography. As outlined in Table , follow-up durations also varied, ranging from a minimum of 6 months to a median of 5.3 years . Unfortunately, both the 80 + study and RINCAL were terminated prematurely due to recruitment challenges. Clinical endpoint definitions and assessment The definition of MI evolved over time, with earlier trials using older universal definitions of MI, while more recent trials like SENIOR-RITA employed the Fourth Universal Definition . 
Bleeding outcome definitions were heterogeneous across the studies: bleeding was classified according to the Bleeding Academic Research Consortium (BARC) definition in 3 trials (SENIOR-RITA, RINCAL, and Italian Elderly ACS) and according to Thrombolysis in Myocardial Infarction (TIMI) criteria in 4 trials (80 + , After Eighty, MOSCA, and TACTICS–TIMI 18), while one study (MOSCA-FRAIL) used a separate definition (Table S2). Bleeding outcomes were harmonized across trials using established criteria from the BARC and TIMI classifications (Table S3) . Major bleeding was defined as BARC type 3b or higher and its TIMI equivalent, encompassing fatal bleeding, symptomatic intracranial hemorrhage, hemodynamic compromise requiring intervention, and bleeding requiring transfusion of ≥ 5 units of whole blood/red cells. Minor bleeding was defined as BARC type 2-3a or its TIMI equivalent, characterized by overt bleeding requiring medical intervention or antithrombotic therapy modification without meeting major bleeding criteria. The data for major and minor bleeding were available separately in 5 trials (SENIOR-RITA, RINCAL, 80 + , After Eighty, and TACTICS–TIMI 18), while among the three remaining trials, bleeding was reported as a composite of major and minor bleeding in two (MOSCA-FRAIL and MOSCA), and in one study (Italian Elderly ACS) the bleeding outcome was defined as a composite of BARC type 2, 3a, and 3b bleeding. Despite different classification systems, the fundamental criteria defining major bleeding events remained consistent between BARC and TIMI scales, enabling reliable cross-trial comparisons .
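As a purely illustrative aside, the harmonization rule described above (major bleeding as BARC type 3b or higher or its TIMI equivalent; minor bleeding as BARC type 2-3a) can be expressed as a simple classification. The sketch below is a simplification, not the trials' adjudication logic, and the TIMI-defined trials would be mapped analogously.

```python
# Minimal sketch (simplified rule): mapping BARC bleeding types onto the
# harmonized major/minor categories described above.
def harmonized_category(barc_type: str) -> str:
    """BARC 3b or higher -> 'major'; BARC 2-3a -> 'minor'; otherwise 'not counted'."""
    major = {"3b", "3c", "4", "5a", "5b"}
    minor = {"2", "3a"}
    if barc_type in major:
        return "major"
    if barc_type in minor:
        return "minor"
    return "not counted"

events = ["2", "3a", "3b", "5a", "1"]            # hypothetical per-patient BARC types
print([harmonized_category(e) for e in events])  # ['minor', 'minor', 'major', 'major', 'not counted']
```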
This meta-analysis, including 4114 patients from 11 RCTs, represents the most comprehensive and up-to-date evaluation of the initial management strategies in elderly NSTE-ACS patients. Our findings address critical knowledge gaps in the care of this high-risk population, revealing that while invasive strategies reduce revascularization needs and may lower the risk of MI, they do not confer survival benefits and are associated with increased bleeding risk. These results have important implications for individualized patient care. The consistency between RR and HR analyses across all outcomes strengthens the robustness of our findings. For revascularization, where results were most consistent, the HR demonstrated a 70% reduction compared to a 59% reduction in the RR analysis. For MI, the HR showed a 36% reduction compared to a 25% reduction in the RR analysis. However, our findings regarding MI warrant cautious interpretation due to moderate to high heterogeneity and sensitivity analyses showing a loss of statistical significance when certain studies were omitted. The variation in effect size between the two methods may be attributed to both the inherent methodological differences between HRs and RRs and the inclusion of different trial versions in the analyses (MOSCA-FRAIL 2023 vs. 2024 , and After Eighty 2016 vs. 2023 in Kotanidis's versus our current analysis, respectively). Kotanidis et al. similarly reported reduced MI risk and revascularization needs without mortality benefit . Damman et al.'s age-stratified patient-level analysis of the FIR trials (FRISC II , RITA 3 , and ICTUS ) demonstrated that while the invasive strategy significantly reduced MI risk in patients over 65, it conferred no survival benefit across age groups (< 65, 65–75, and ≥ 75) . In contrast, Improta et al.'s meta-analysis, which included both RCTs and adjusted observational studies, suggested a short-term survival advantage with invasive management . This discrepancy is likely attributable to the inclusion of non-RCT data, which may have introduced confounding factors not present in strictly controlled trial environments. The results of the current study reinforce the observation that while invasive strategies can effectively prevent recurrent ischemic events, they do not necessarily translate into improved survival. Recent trials have highlighted the complex relationship between geriatric conditions, including frailty, comorbidity burden, and cognitive impairment, and treatment outcomes in elderly NSTE-ACS patients . The MOSCA-FRAIL trial revealed distinct temporal patterns in frail patients (defined by a Clinical Frailty Scale score > 4): those managed invasively experienced early adverse outcomes during the first year followed by potential later benefits, ultimately leading to neutral long-term results . The SENIOR-RITA trial similarly found no significant differences in outcomes between invasive and conservative strategies in both frail and non-frail subgroups (HRs: 0.92 and 0.97, respectively) .
The burden of comorbidities, assessed through the Charlson Comorbidity Index (median score of 5 in both the SENIOR-RITA and MOSCA-FRAIL trials), did not significantly modify treatment effectiveness. Furthermore, regarding cognitive impairment (based on Montreal Cognitive Assessment scores < 26), which affected 62.5% of the SENIOR-RITA population, there was a trend toward lower rates of the composite endpoint of cardiovascular death or nonfatal MI with invasive strategy in non-impaired patients (HR 1.18, 95% CI 0.81–1.72) and with conservative strategy in cognitively impaired patients (HR 0.85, 95% CI 0.67–1.09), though these differences were not statistically significant . This finding is in line with contemporary evidence demonstrating that cognitive impairment is associated with higher short- and long-term mortality in ACS patients undergoing coronary revascularization . The relationship between cognitive status and all-cause mortality in elderly NSTE-ACS patients persists even after adjusting for frailty and other geriatric factors, as demonstrated in a recent long-term follow-up study . Age-stratified subgroup analysis of the SENIOR-RITA trial showed that while younger elderly patients (< 80 years) demonstrated a trend toward benefit from the invasive strategy (HR 0.70, 95% CI 0.46–1.07) for the composite endpoint of cardiovascular death or nonfatal MI, patients ≥ 80 years derived no apparent benefit (HR 1.01, 95% CI 0.81–1.27). While these subgroup analyses suggest important trends, dedicated prospective studies focusing on octogenarians and incorporating cognitive function and other geriatric measures as primary endpoints are needed to guide individualized treatment decisions better. Our subgroup analyses further highlight age-specific considerations, with octogenarians showing loss of MI benefit and a concerning trend toward higher stroke risk with the invasive strategy, though statistical significance was not reached. This vulnerability to stroke complications in the most elderly patients was further supported by our meta-regression analysis, which demonstrated a positive trend corresponding to a 15% increase in stroke relative risk for each year of advancing age. While these parallel findings strengthen the likelihood of a true age-dependent relationship, the absence of statistical significance in both analyses warrants cautious interpretation. Our findings strongly align with current ESC guideline recommendations for a selective approach to invasive management in elderly NSTE-ACS patients, carefully considering individual geriatric factors and balancing temporal patterns of benefits against risks . The sustained reduction in revascularization needs and potential decrease in recurrent MI risk support considering invasive strategies in selected elderly patients, though this benefit must be carefully weighed against the impact of frailty, cognitive status, and other comorbidities, which can increase procedural risks and complicate recovery . Thus, patient selection should incorporate several key factors. First, assessment of ischemic risk is crucial, as patients at higher risk of recurrent events may derive greater early and sustained benefit from invasive management. Second, given that mortality benefits were not observed over time, the decision should focus on quality of life and symptom improvement rather than survival advantage.
We suggest future studies focus on comparing quality of life outcomes and functional status in elderly NSTE-ACS patients undergoing different management strategies. Third, the observed increase in bleeding complications emphasizes the need for thorough pre-procedural bleeding risk assessment and implementation of modern bleeding avoidance strategies, including preferred radial access . Future studies are warranted to examine the impact of newer access site techniques, closure devices, and modified anticoagulation protocols on bleeding outcomes. The role of abbreviated dual antiplatelet therapy durations following invasive management in the elderly, particularly those with high bleeding risk, also deserves focused investigation. Finally, studies evaluating the relationship between bleeding events and subsequent functional decline, quality of life, and long-term outcomes could provide valuable insights for patient risk–benefit discussions. Strengths and limitations Our meta-analysis offers several key strengths. First, with the inclusion of the SENIOR-RITA trial (1,518 patients) , our sample size nearly doubles that of the recent individual patient data meta-analysis by Kotanidis et al. . Second, incorporating extended follow-up data from the After Eighty and MOSCA-FRAIL trials provides more robust longitudinal evidence . Third, we conducted sensitivity analyses using the "leave-one-out" method, examining the robustness of our findings. Finally, our dual analytical approach using RRs and time-to-event analyses enhances the reliability of our findings. However, several limitations merit consideration. While our exclusive focus on RCTs ensures high internal validity, it may limit generalizability to real-world elderly populations who typically present with more complex comorbidity profiles. The heterogeneity in invasive protocols and medical practices across studies could influence outcomes, although we mitigated this through random-effects modeling and comprehensive sensitivity analyses. The inclusion of data from underpowered RCT subgroup analyses might introduce reporting bias. Additionally, formal assessment of publication bias was precluded by the limited number of included studies, leaving this potential source of bias unquantified. This limitation highlights the need for further high-quality RCTs designed explicitly for elderly patients with NSTE-ACS to expand the evidence base.
This meta-analysis indicates that in elderly patients with NSTE-ACS, invasive strategies significantly reduce revascularization needs and may lower MI risk, though the latter finding showed moderate heterogeneity across studies. While no survival benefit was observed in either short- or long-term follow-up, invasive management increased bleeding risk. The temporal patterns of benefit and risk, along with the heterogeneous findings for some outcomes, emphasize the need for individualized treatment decisions based on patient-specific characteristics and risk factors, particularly considering bleeding risk and geriatric factors. Supplementary Material 1.
Influence of biochar on the partitioning of iron and arsenic from paddy soil contaminated by acid mine drainage
Mining activities are some of the most environmentally destructive anthropogenic practices and have led to excessive levels of metal(loid)s in soil. Arsenic (As) contamination has attracted worldwide attention in recent years due to its significant threat to the agricultural product quality and human health , . Arsenic exposure through the food chain can cause disorders of the blood vessels, reproductive system and nervous system . The application of stabilisation agents to soil is a commonly used As remediation technology. Among various materials, biochar has attracted increasing interest due to its widespread sources, low cost and high stability. Biochar contains alkaline salts, a substantial amount of dissolved organic carbon (DOC) and oxygen-containing functional groups, and has a high specific surface area and porous structure . All of these physiochemical properties can change the environmental conditions of soil and influence the partitioning of As between solid and liquid phases. The application of biochar to soil has the potential to enhance the release of As, especially in paddy fields. The porous structure of biochar is one of its most attractive features due to its ability to increase soil porosity . This may improve soil's water-holding capacity and decrease the solid-liquid ratio. The addition of water helps to enhance the dissolution of soil minerals and release trace elements, including heavy metals. Biochar also has a liming effect because it contains negatively charged groups such as hydroxyl and carboxyl groups, which can react with H + . As soil pH increases, mineral particle surfaces will have greater negative charges and their ability to adsorb anion ions like HAsO 4 2− and HAsO 3 2− will be weakened . This will also promote the release of As into soil solution. Additionally, some biochar functional groups—particularly hydroxyl (OH − ) groups—can act as electron donors . The functional groups of biochar vary with different pyrolysis temperatures, which can influence the bioavailability of As . Biochar can also provide organic carbon to promote anaerobic microbial activity , leading to a reduction of oxidising minerals in the soil. It has been reported that biochar can decrease redox potential (Eh) to − 300 mV and enhance the release of iron (Fe) and As , . However, as raw biochar cannot generally be used as an amendment agent for As-contaminated paddy fields, biochar is modified to improve its ability to passivate As. One modification method is to impregnate its surface with metal oxides, of which iron oxide has attracted the most attention . This is because iron oxide is environmentally friendly and can sequester As through inner-sphere complexation on its surface . Wu et al. showed that the application of iron-modified biochar to soil can increase microbial diversity, but the relative abundance of different species may be positively or negatively affected , . However, variations in the microbial community do not affect the ability of biochar to remediate As-contaminated paddy fields. Furthermore, it has been shown that modified biochar can significantly reduce the bioavailability of As and its content in rice grains , . Functional groups such as C = O can also decrease the bioavailability of As by acting as electron acceptors to oxidise As(III) to the less mobile As(V) .
Once As is immobilised in the pores of biochar, its stability increases. Even though iron oxides in soil shift between amorphous and crystalline phases, their ability to adsorb dissolving As does not change significantly. Acid mine drainage (AMD) is a common source of pollution from mining activity and contributes greatly to soil contamination. AMD is mainly generated from the oxidation of pyrite (FeS 2 ). Its features include a low pH and high concentrations of sulphate, heavy metals and metalloids . AMD into farmland leads to significant physiochemical changes, including soil acidification, heavy metal enrichment and alterations to the microbial community . In terms of metal enrichment, Fe generally increases the most. This usually results in the generation of (hydro)oxides such as ferrihydrite, goethite and hematite, which are reduced by dissimilatory iron-reducing bacteria under anaerobic conditions to release large amounts of ferrous ions (Fe(II)) into soil solution . These cations can be readily adsorbed by the negatively charged functional groups of biochar , as in the preparation process of iron-modified biochar described previously , and are oxidised to Fe(III) under aerobic conditions. Based on the above understanding, we hypothesised that biochar could load iron oxides and effectively immobilise As in AMD-contaminated paddy fields with fluctuating redox conditions. The objectives of this study were to (1) investigate the impact of biochar application on soil properties with changes in anaerobic and aerobic conditions, (2) to understand the effect of biochar on soil Fe and As transformation and release, and (3) to elucidate the partitioning process of the two elements on the biochar surface. The research results will provide insight into the use of biochar for As immobilisation in AMD-contaminated paddy fields. Preparation of soil and biochar Soil samples were collected from paddy fields near a high-As coal mine in Jiaole, Guizhou Province, southwest China. The soil at this site had been acidified by AMD, and its pH was about 4.8. Its Fe content was about 82 g kg −1 , and its As content was about 65 mg kg −1 , which exceeded China’s standard for soil (GB 15618-2008) (30 mg kg −1 ). This has influenced rice grain quality in the area . The soil texture was silty clay loam. Other soil characteristics were described in our previous publication . The samples were air-dried and then crushed to a particle size of < 0.4 mm for incubation experiments. Eupatorium adenophorum is an invasive species in the study area. It was dried in an oven at 60 °C and used to prepare raw biochar in a tubular carbonisation furnace. The pyrolysis temperature was a key parameter in biochar production. Eupatorium adenophorum biochar is typically produced within the temperature range of 300–700 °C , . If the temperature is too high, the biochar surface may lack functional groups. Additionally, the biochar produced at 300 °C is nearly neutral and does not provide advantages for remediating acidified soil caused by AMD. Therefore, the pyrolysis temperatures of 400 °C, 550℃, and 700 °C were selected for this study. The residence time in the reactor was 4 h, and the heating rate was 2.5 °C min −1 . The biochar produced at the different pyrolysis temperatures was denoted BC-400, BC-550 and BC-700. The biochar was washed with ultrapure water before use to remove ash and soluble components. 
Incubation experiment Figure presents a schematic diagram of the experimental setup, which was adopted to conduct a series of bench-scale soil incubation experiments. The apparatus was a columnar Perspex reactor with a diameter of 12 cm, a height of 21 cm and a sealing cover at the top. A stirrer, pH and Eh probes, and an exhaust valve were installed on it. One aeration disk was set at the bottom and connected to nitrogen and oxygen cylinders. The sampling port was located at the lower part of the cylinder. Portions of soil (30 kg) were mixed thoroughly with one of the biochar samples According to our pot experiment , the biochar application rate was 3% (w/w). They were then placed into the setup. Based on the biochar used, the experiments were divided into the BC-400, BC-550 and BC-700 groups, and a control group (CK) without biochar was also included. Additionally, ultrapure water was added in a solid-liquid ratio of 5:6, and the resulting suspension was stirred at 100 rpm. Nitrogen and oxygen were injected in sequence to create an anaerobic or aerobic environment, with each period lasting for 15 days. The entire experiment spanned 75 days, including three anaerobic periods (0–15 d, 30–45 d and 60–75 d) and two aerobic periods (15–30 d and 45–60 d). The pH and redox potential (Eh) were measured using portable parameter instruments. Each treatment was performed in triplicate. Sample collection and determination Aliquots of the soil suspension were collected into centrifuge tubes at the end of each period (15, 30, 45, 60 and 75 d). These samples were centrifuged at 5000 rpm for 20 min and the soil solution was filtered into centrifuge tubes through a 0.45 μm filter membrane for the determination of total Fe, Fe(II), As and DOC. The remaining soil and aged biochar samples were separated by the flotation method . The soil was freeze-dried to a constant weight and ground through a 200-mesh quasi-sieve to determine the Fe and As content and the As form. The biochar was washed using ultrapure water to remove soil adhered to its surface for the analysis of Fe and As content, surface morphology and chemistry. Additionally, samples of fresh soil suspension were collected in the second anaerobic and aerobic periods for the analysis of microorganisms. The concentrations of total Fe and As in soil solution were determined by flame atomic absorption spectrophotometry (FAAS) and inductively coupled plasma mass spectrometry (ICP-MS), respectively. The concentration of Fe(II) was determined by the phenanthroline spectrophotometric method. The DOC was analysed using a total organic carbon analyser (TOC-5000 A). The Fe and As content in soil and biochar were determined using the above methods after digestion with aqua regia. The form of As in the soil was determined by the modified Wenzel’s method and was classified as non-specifically bound (F1), specifically bound (F2), amorphous and poorly crystalline hydrous oxides (F3), well-crystallised hydrous oxides (F4) and residual phase (F5). The microbial population structure in soil was investigated using high-throughput sequencing of 16 S rRNA . The microbial biomass carbon (MBC) was measured via the substrate-induced respiration method . Fourier transform infrared spectroscopy (FTIR) was used to analyse the chemical properties of the raw biochar. Scanning electron microscopy (SEM) was used to investigate the morphology of the aged biochars and X-ray photoelectron spectroscopy (XPS) was used to determine their surface chemical properties. 
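As an illustration of how the sequential-extraction data described above can be summarised, the short sketch below computes the share of total As held in each Wenzel fraction. The concentrations are hypothetical and the sketch is not the authors' processing script.

```python
# Minimal sketch (hypothetical values): summarising Wenzel sequential-extraction
# results as the share of total extractable As in each fraction.
fractions_mg_kg = {  # hypothetical As concentrations per fraction, mg per kg soil
    "F1 non-specifically bound": 0.8,
    "F2 specifically bound": 4.5,
    "F3 amorphous/poorly crystalline hydrous oxides": 9.0,
    "F4 well-crystallised hydrous oxides": 19.2,
    "F5 residual": 31.5,
}

total_as = sum(fractions_mg_kg.values())
for name, conc in fractions_mg_kg.items():
    share = 100 * conc / total_as
    print(f"{name}: {conc:.1f} mg kg-1 ({share:.1f}% of total As)")
print(f"Sum of fractions: {total_as:.1f} mg kg-1")
```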
Data processing The XPS results were fitted using Avantage 5.9, where the standard peak for fitting the C1s spectrum was set at 284.8 eV. The data from triplicates were averaged and standard deviations (SD) were calculated. Pearson correlation analysis ( p < 0.05) was carried out to examine the relationship between different indicators. The figures were produced using Origin 2021.
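A minimal example of the correlation testing described above is given below. The values are hypothetical, and SciPy is used here only for illustration since the package used for this step is not specified in the text.

```python
# Minimal sketch (hypothetical data): Pearson correlation between dissolved Fe
# and As concentrations, of the kind reported in the results.
from scipy.stats import pearsonr

fe_mg_l = [45.3, 69.6, 120.4, 80.2, 33.5, 25.1]  # hypothetical total Fe, mg L-1
as_ug_l = [6.1, 9.4, 25.8, 17.2, 7.9, 5.3]       # hypothetical As, ug L-1

r, p = pearsonr(fe_mg_l, as_ug_l)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")       # treated as significant if p < 0.05
```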
Variation of environmental conditions There were similar variation characteristics of the pH, Eh and DOC concentrations in soil suspensions among groups during the anaerobic/aerobic periodic experiments (Fig. ). The pH of the CK group was about 5.4 at 0–45 d, decreased to 5.2 after the following aerobic period and changed minimally after the final anaerobic period. The BC-400 group also experienced a significant decrease in pH after the second aerobic treatment and was about 0.2 lower than the CK group, on average. The pH of both the BC-550 and BC-700 groups was lower than that of the CK group throughout the entire experimental process, with the lowest pH being that of the BC-550 group. The Eh in each group showed an overall increasing trend during the experiment. The Eh ranged from 80 to 179 mV in the CK group. The Eh of the BC-400 group was consistently lower than that of the CK group, with a minimum of 12 mV. Usually, iron oxides undergo reductive dissolution when the redox potential is below 200 mV and As is also released .
The DOC concentration in soil solution showed a decreasing trend in each group, which indicated that DOC was continuously decomposed by microorganisms during the experiment. Except for at 45 d, the DOC concentration in each BC group was lower than that in the CK group and the BC-400 group was lowest at about 8.61 mg L −1 . This was because the cleaned biochar did not release DOC and probably had a certain adsorption effect . The amount of soil microbial biomass can be reflected by the MBC. The analysis results showed that the MBC of the CK group was approximately 379 mg kg −1 (dry soil) at the end of the experiment, while the MBC of the BC-400, BC-550 and BC-700 groups decreased by about 76 mg kg −1 (dry soil), 6 mg kg −1 (dry soil) and 183 mg kg −1 (dry soil), respectively. After anaerobic treatment, the main microbial species in each group were similar at the phylum level, including Proteobacteria, Firmicutes, Acidobacteria and Chloroflexi with relative abundances greater than 10%, followed by Bacilli, Chthonomonadetes etc. (Fig. ). Proteobacteria genera Geobacterium and Shewanella are typical dissimilatory iron-reducing bacteria . The Clostridium genus of Firmicutes also has iron-reducing ability and the arrA gene it carries also reduces As . Additionally, the Acidobacteria phylum also contains iron-reducing bacteria . After the aerobic treatments, the microbial species of each group did not change at the phylum level. However, the relative abundance of Firmicutes decreased significantly, while the relative abundance of Proteobacteria and Chloroflexi increased. There was a significant negative correlation between the relative abundance of Firmicutes and Chloroflexi ( r = -0.80, p < 0.05). The Proteobacteria genera Alphaproteobacteria and Betaproteobacteria have Fe(II) oxidation ability . Promoting effect of biochar on iron and arsenic release The environmental behaviour of As in soil is closely related to the presence of iron oxides. Throughout the entire experimental process, the Fe concentration in soil solution showed high and low periodic variations under anaerobic/aerobic operations, respectively (Fig. a). In the first anaerobic period (0–15 d), the total Fe concentration in soil solution ranged from 45.30 to 69.60 mg L −1 with the main speciation being Fe(II). Although the BC-400 group had the lowest Eh, its Fe concentration was not significantly higher than that of the CK group; however, in the second anaerobic period (30–45 d), the BC-400 group had the highest Fe concentration, followed by the BC-550 group. The Fe concentration of the BC-700 group was slightly higher than that of the CK group. These results indicated that after a certain period of interaction between biochar and soil, iron release was significantly enhanced. It was also observed that the lower the pyrolysis temperature of the biochar, the greater the Fe release. The analysis of the relationship with microorganisms revealed a positive correlation between Fe(II) concentration and the relative abundance of Firmicutes ( r = 0.61, p = 0.11) during the second anaerobic and aerobic periods, while a significant negative correlation was observed with the relative abundance of Proteobacteria ( r = -0.76, p < 0.05). This suggested that Firmicutes might play a key role in the reductive dissolution of iron oxides, while Proteobacteria likely catalysed the dissimilatory oxidation of Fe(II), promoting the formation of iron oxide precipitates. 
The As concentration in soil solution also showed periodic variations with experimental conditions. It decreased after entering the aerobic period (Fig. b) and was significantly positively correlated with Fe concentration ( r = 0.66, p < 0.01). In the first anaerobic period, As release was relatively weak and the concentration was between 6.06 and 9.43 µg L −1 , with little difference among the groups. The average As/Fe concentration ratio was 1.2 × 10 −4 . The As concentration in the second anaerobic stage was about 2 to 4 times that in the first stage and the As concentrations of the three BC groups—especially BC-400 and BC-550—were higher than that of the CK group. This was because biochar could serve as an electron donor and shuttle to promote iron oxide dissolution and As release . During this stage, the As/Fe concentration ratio increased to about 4.5 × 10 −4 . The As concentration continued to decrease after the experiment progressed into the third anaerobic period (60–75 d). This may have been related to the continuous consumption of DOC (Fig. c), which weakened the reduction of iron oxides. Additionally, under anaerobic conditions, Fe(II) would have been oxidised by iron-oxidising bacteria to generate secondary minerals such as ferrihydrite, with which As would have coprecipitated . The DOC and functional groups contained in biochar can influence microbial activity and soil mineral transformation . The biochar did not increase the DOC concentration in soil solution because of the pretreatment (Fig. c). As described above, the soil microbial biomass carbon and population structure of the BC-400 and BC-550 groups were similar to those of the CK group, suggesting that their higher Fe release during the second anaerobic period may not have been solely influenced by iron-reducing bacteria. Although the microbial biomass of the BC-700 group was about half that of the CK group, the difference in Fe concentration between the two groups was relatively small, which further confirmed this point. It is speculated that the different Fe release levels between the BC and CK groups may be related to the differences in the functional groups of the biochar. The results of biochar FTIR analysis are shown in Fig. . Compared with BC-700, BC-400 and BC-550 contained more functional groups. The characteristic peaks were mainly concentrated in the wavenumber range of 750 to 3500 cm −1 , indicating the presence of various functional groups on the biochar surfaces. Hydroxyl groups exhibit stretching vibrations at 3420 cm −1 , C-H at 2920 cm −1 , C = C group at 2360 cm [− and C = C/C = O on the aromatic ring at 1567 cm [−1 , – . This indicated that the biomass was not fully carbonised after pyrolysis and there were many aromatic compounds in BC-400 and BC-500. Therefore, they likely contained phenolic and alcohol compounds, with the oxygen atoms in the hydroxyl groups serving as electron donors , . The graphite-like aromatic structures could also serve as electron shuttles , thereby promoting the reduction and dissolution of Fe(III) minerals. Kappler et al. and Xu et al. also found that biochar promotes the dissolution of weakly crystalline iron oxides (ferrihydrite) and crystalline iron oxides (haematite). As the pyrolysis temperature increased, the number of electron donors contained in the biochar decreased ; this resulted in a significant decrease in Fe release for the BC-700 group compared with the other two BC groups. 
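The As/Fe ratios quoted above follow directly from the measured concentrations. The sketch below uses illustrative numbers (roughly 8 µg L−1 As and 65 mg L−1 Fe, close to the first anaerobic period) to show how the mass ratio of about 1.2 × 10−4 is obtained and how a molar ratio, of the kind used later for the solid phases, is derived from the same values.

```python
# Minimal sketch: converting solution concentrations into As/Fe mass and molar
# ratios. Input values are illustrative, not the measured dataset.
AS_MOLAR_MASS = 74.92  # g mol-1
FE_MOLAR_MASS = 55.85  # g mol-1

def as_fe_ratios(as_ug_per_l, fe_mg_per_l):
    """Return (mass ratio, molar ratio) of As to Fe for one solution sample."""
    as_mg_per_l = as_ug_per_l / 1000.0
    mass_ratio = as_mg_per_l / fe_mg_per_l
    molar_ratio = (as_mg_per_l / AS_MOLAR_MASS) / (fe_mg_per_l / FE_MOLAR_MASS)
    return mass_ratio, molar_ratio

mass_r, molar_r = as_fe_ratios(8.0, 65.0)
print(f"As/Fe mass ratio  = {mass_r:.1e}")   # about 1.2e-4, as in the first anaerobic period
print(f"As/Fe molar ratio = {molar_r:.1e}")
```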
Analysis of arsenic transformation in soil The content and form of As in the soil were influenced by the interaction between biochar and soil (Fig. ). Compared with the CK group, the As content in the soil of all three BC groups decreased at the end of the experiment. The lower the pyrolysis temperature, the lower the As content in the soil, with a decrease of approximately 10.5 mg kg −1 in the BC-400 group compared with the CK group. The large specific surface area and abundant functional groups of biochar meant that it not only promoted the reduction and dissolution of As and Fe in soil but also caused coprecipitation of the two elements on its surface , resulting in a significant decrease of soil As content. The main forms of As in the CK group soil were F4 and F5, accounting for 29.5% and 48.4%, respectively. The soil As forms in BC-400, BC-550 and BC-700 were similar but their F5 content was significantly decreased compared with the CK group, by about 12.0, 7.2 and 7.3 mg kg –1 , respectively. The phenomenon of residual As decrease was consistent with the findings of Cai et al. and Jiang et al. . Additionally, the content of F4 increased by about 1.1 mg kg −1 in the BC-400 group but decreased by about 2.4 mg kg −1 in the BC-550 group. This may have been due to the fragmentation of soil particles and a decrease in particle size under an anaerobic environment , which meant that the As bound to iron oxides that were encapsulated by other minerals would be released and further transformed. In addition, the contents of F1 increased in the BC groups, particularly in the BC-400 and BC-550 groups, where it rose by approximately 3 and 5 times, respectively. In contrast, Wei et al. conducted a meta-analysis and found that iron-modified biochar reduced the content of F1. The discrepancy suggested that raw biochar might utilize the functional groups to enhance the mobility and repartition of As in soil. Iron and arsenic loading on biochar The variations in Fe and As content on biochar differed from those in soil solution and did not show periodic variations with changes between anaerobic to aerobic conditions (Fig. ). For the three types of biochar, the Fe loading mainly occurred in the first anaerobic period. Additionally, the lower the pyrolysis temperature when forming the biochar, the more Fe was loaded. This may have been because biochar prepared at lower temperatures has a greater abundance of functional groups , such as -OH, which can bind with Fe [2+ . The Fe content of biochar in the BC-400 group was relatively high throughout the experiment and showed a decreasing trend over time. This may have been due to the action of dissimilatory iron-reducing bacteria on iron oxides with lower crystallinity (such as ferrihydrite), causing them to detach, similar to iron plaque on rice roots . The other two groups also displayed this phenomenon but at different times. It was also observed that the Fe content of biochar in the BC-550 group was lower than in the other two groups after the second aerobic period (45–60 days). This could be attributed to its lowest relative abundance of Proteobacteria (Fig. ), which resulted in more Fe(II) remaining in the soil solution (Fig. ). In contrast to Fe content variation, the amount of As loaded on biochar in each group showed an overall increasing trend over time. This indicated that, although As was mainly combined with iron oxides, the loading capacity of biochar for As did not weaken as the Fe content decreased. 
The first 30 days were the primary period for biochar to adsorb As, and the As content on biochar was significantly positively correlated with the As concentration in the soil solution ( r = 0.86, p < 0.05). Based on the significant positive correlation between As and Fe concentrations in the soil solution, it was believed that As on biochar primarily originated from the dissolution of As-containing iron oxides. Surface morphology analysis using SEM showed that the pores were filled after the interaction between biochar and soil and that the minerals therein were in a cemented state (Fig. ). This was because biochar had a larger specific surface area and higher surface energy, which could promote the aggregation of mineral particles on its pore surface, thereby reducing surface energy. In the high-resolution Fe 2p XPS spectrum of biochar, peaks at 710.40–711.00 eV (Fe 2p3/2) and 723.89–724.32 eV (Fe 2p1/2) were observed (Fig. ). The single peaks at binding energies of 711.00, 710.40 710.76, 723.89 and 724.32 eV corresponded to Fe 3 O 4 – and the peak at 724.30 eV corresponded to FeOOH . The iron oxides carried by biochar had a strong adsorption capacity for As. The characteristic peaks of As(III) at 43.40–43.80 eV were detected on the biochar surface , . These findings suggested that the biochar mainly adsorbed As(III) on its surface and formed inner-sphere complexes through ligand exchange between arsenite anions and the hydroxyl functional groups of iron (hydr)oxide – . There was no evidence that Fe(III) oxidised As(III). Partition analysis of iron and arsenic in soil As and Fe are closely related in soil and are simultaneously partitioned to soil solution and biochar under flooding conditions. This process is influenced by physiochemical environmental conditions. The As/Fe molar ratio in soil solution exhibited fluctuating characteristics (Fig. ). Except for the late period for the CK group, the As/Fe molar ratio increased after entering aerobic conditions. This indicated that, although the As and Fe that were released under anaerobic conditions underwent coprecipitation upon the introduction of oxygen, the proportion of As precipitation was relatively low. This suggested that As may have stronger mobility than Fe. It may also be due to the weaker secondary adsorption ability of iron oxides for As under the action of microorganisms . During the experimental process, the As/Fe molar ratio of biochar prepared at different temperatures showed an increasing trend. In the first anaerobic period, the As/Fe molar ratio was lower than that of the soil, which indicated that Fe in soil was more easily partitioned to biochar than As. The gradual increase in the ratio suggested that the adsorption capacity of biochar for As was continuously enhanced (Fig. ). By the end of the experiment, the As/Fe molar ratio was greater than that of the soil. This further indicated that the partitioning of As from soil to biochar was enhanced, resulting in a larger proportion of As in the soil being enriched on biochar. Therefore, it is speculated that when biochar was applied to As-polluted paddy fields, Fe preferentially attached to it and its capacity to load As was enhanced after cycles of wet-drought. Practical implications AMD pollution in paddy fields can cause soil acidification and the accumulation of Fe and As. Biochar made from Eupatorium adenophorum may help improve contaminated soil. 
Fe and As will be released and mainly immobilized in the pore structure of biochar as redox conditions fluctuate in paddy fields. Applying biochar prepared at lower temperatures (400 °C and 550 °C) will be more effective in dissolving As-containing iron oxides in soil, while biochar prepared at 400 °C has abundant functional groups and is relatively more stable for immobilizing Fe and As. As trapped in the pores of biochar may not be easily released back into the soil solution under fluctuating redox conditions. The stability of surface As may be influenced by the transformation of the iron oxide crystal structure. Eupatorium adenophorum is an aggressive invasive plant found worldwide, which is both inexpensive and easy to obtain. Using it as feedstock can help reduce the production cost of biochar. After biochar application in paddy fields, grain yield may increase with improvements in soil quality. However, it should be noted that biochar may introduce organic and inorganic contaminants into soil, potentially causing phytotoxicity, cytotoxicity, and neurotoxicity. The microbial population structure in AMD-contaminated fields will change in response to shifts in soil environmental conditions.

There were similar variation characteristics of the pH, Eh and DOC concentrations in soil suspensions among groups during the anaerobic/aerobic periodic experiments (Fig. ). The pH of the CK group was about 5.4 at 0–45 d, decreased to 5.2 after the following aerobic period and changed minimally after the final anaerobic period. The BC-400 group also experienced a significant decrease in pH after the second aerobic treatment and was about 0.2 lower than the CK group, on average. The pH of both the BC-550 and BC-700 groups was lower than that of the CK group throughout the entire experimental process, with the lowest pH being that of the BC-550 group. The Eh in each group showed an overall increasing trend during the experiment. The Eh ranged from 80 to 179 mV in the CK group. The Eh of the BC-400 group was consistently lower than that of the CK group, with a minimum of 12 mV. Usually, iron oxides undergo reductive dissolution when the redox potential is below 200 mV, and As is also released. The DOC concentration in soil solution showed a decreasing trend in each group, which indicated that DOC was continuously decomposed by microorganisms during the experiment. Except at 45 d, the DOC concentration in each BC group was lower than that in the CK group, and that of the BC-400 group was lowest at about 8.61 mg L−1. This was because the cleaned biochar did not release DOC and probably had a certain adsorption effect.

The amount of soil microbial biomass can be reflected by the MBC. The analysis results showed that the MBC of the CK group was approximately 379 mg kg−1 (dry soil) at the end of the experiment, while the MBC of the BC-400, BC-550 and BC-700 groups decreased by about 76, 6 and 183 mg kg−1 (dry soil), respectively. After anaerobic treatment, the main microbial species in each group were similar at the phylum level, including Proteobacteria, Firmicutes, Acidobacteria and Chloroflexi with relative abundances greater than 10%, followed by Bacilli, Chthonomonadetes, etc. (Fig. ). The Proteobacteria genera Geobacter and Shewanella are typical dissimilatory iron-reducing bacteria. The Clostridium genus of Firmicutes also has iron-reducing ability, and the arrA gene it carries also reduces As.
The Acidobacteria phylum also contains iron-reducing bacteria. After the aerobic treatments, the microbial species in each group did not change at the phylum level. However, the relative abundance of Firmicutes decreased significantly, while the relative abundances of Proteobacteria and Chloroflexi increased. There was a significant negative correlation between the relative abundances of Firmicutes and Chloroflexi (r = -0.80, p < 0.05). The Proteobacteria classes Alphaproteobacteria and Betaproteobacteria include bacteria with Fe(II) oxidation ability.

The environmental behaviour of As in soil is closely related to the presence of iron oxides. Throughout the entire experimental process, the Fe concentration in soil solution showed high and low periodic variations under anaerobic and aerobic operations, respectively (Fig. a). In the first anaerobic period (0–15 d), the total Fe concentration in soil solution ranged from 45.30 to 69.60 mg L−1, with the main speciation being Fe(II). Although the BC-400 group had the lowest Eh, its Fe concentration was not significantly higher than that of the CK group; however, in the second anaerobic period (30–45 d), the BC-400 group had the highest Fe concentration, followed by the BC-550 group. The Fe concentration of the BC-700 group was slightly higher than that of the CK group. These results indicated that, after a certain period of interaction between biochar and soil, iron release was significantly enhanced. It was also observed that the lower the pyrolysis temperature of the biochar, the greater the Fe release. Analysis of the relationship with microorganisms revealed a positive correlation between Fe(II) concentration and the relative abundance of Firmicutes (r = 0.61, p = 0.11) during the second anaerobic and aerobic periods, while a significant negative correlation was observed with the relative abundance of Proteobacteria (r = -0.76, p < 0.05). This suggested that Firmicutes might play a key role in the reductive dissolution of iron oxides, while Proteobacteria likely catalysed the dissimilatory oxidation of Fe(II), promoting the formation of iron oxide precipitates.

The As concentration in soil solution also showed periodic variations with the experimental conditions. It decreased after entering the aerobic period (Fig. b) and was significantly positively correlated with the Fe concentration (r = 0.66, p < 0.01). In the first anaerobic period, As release was relatively weak and the concentration was between 6.06 and 9.43 µg L−1, with little difference among the groups. The average As/Fe concentration ratio was 1.2 × 10−4. The As concentration in the second anaerobic stage was about 2 to 4 times that in the first stage, and the As concentrations of the three BC groups (especially BC-400 and BC-550) were higher than that of the CK group. This was because biochar could serve as an electron donor and shuttle to promote iron oxide dissolution and As release. During this stage, the As/Fe concentration ratio increased to about 4.5 × 10−4. The As concentration continued to decrease after the experiment progressed into the third anaerobic period (60–75 d). This may have been related to the continuous consumption of DOC (Fig. c), which weakened the reduction of iron oxides. Additionally, under anaerobic conditions, Fe(II) would have been oxidised by iron-oxidising bacteria to generate secondary minerals such as ferrihydrite, with which As would have coprecipitated.
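As a quick plausibility check on the As/Fe concentration ratios reported above, the sketch below converts mass concentrations in soil solution into an As/Fe molar ratio. The midpoint concentrations used here are illustrative values taken from the ranges quoted in this section, not measured replicates.

```python
# Convert soil-solution concentrations (As in ug/L, Fe in mg/L) to an As/Fe molar ratio.
# The input values are illustrative midpoints of the ranges reported for the first
# anaerobic period (As ~6-9 ug/L, Fe ~45-70 mg/L); they are assumptions, not data.

AS_MOLAR_MASS = 74.92  # g/mol
FE_MOLAR_MASS = 55.85  # g/mol

def as_fe_molar_ratio(as_ug_per_l: float, fe_mg_per_l: float) -> float:
    as_mol = (as_ug_per_l * 1e-6) / AS_MOLAR_MASS  # g/L -> mol/L
    fe_mol = (fe_mg_per_l * 1e-3) / FE_MOLAR_MASS
    return as_mol / fe_mol

ratio = as_fe_molar_ratio(7.7, 57.5)
print(f"As/Fe molar ratio ~ {ratio:.1e}")  # ~1e-4, the same order as the reported 1.2e-4
```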
The DOC and functional groups contained in biochar can influence microbial activity and soil mineral transformation. The biochar did not increase the DOC concentration in soil solution because of the pretreatment (Fig. c). As described above, the soil microbial biomass carbon and population structure of the BC-400 and BC-550 groups were similar to those of the CK group, suggesting that their higher Fe release during the second anaerobic period may not have been solely driven by iron-reducing bacteria. Although the microbial biomass of the BC-700 group was about half that of the CK group, the difference in Fe concentration between the two groups was relatively small, which further confirmed this point. It is speculated that the different Fe release levels between the BC and CK groups may be related to differences in the functional groups of the biochar. The results of the biochar FTIR analysis are shown in Fig. . Compared with BC-700, BC-400 and BC-550 contained more functional groups. The characteristic peaks were mainly concentrated in the wavenumber range of 750 to 3500 cm−1, indicating the presence of various functional groups on the biochar surfaces. Hydroxyl groups exhibit stretching vibrations at 3420 cm−1, C-H at 2920 cm−1, a C=C group at 2360 cm−1 and aromatic-ring C=C/C=O at 1567 cm−1. This indicated that the biomass was not fully carbonised after pyrolysis and that there were many aromatic compounds in BC-400 and BC-550. Therefore, they likely contained phenolic and alcohol compounds, with the oxygen atoms in the hydroxyl groups serving as electron donors. The graphite-like aromatic structures could also serve as electron shuttles, thereby promoting the reduction and dissolution of Fe(III) minerals. Kappler et al. and Xu et al. also found that biochar promotes the dissolution of weakly crystalline iron oxides (ferrihydrite) and crystalline iron oxides (haematite). As the pyrolysis temperature increased, the number of electron donors contained in the biochar decreased; this resulted in a significant decrease in Fe release for the BC-700 group compared with the other two BC groups.
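To keep the band assignments above easy to apply, the following minimal sketch annotates a list of observed FTIR peak positions using the assignments stated in this section; the observed peak list is a hypothetical input, not data from this study.

```python
# Look up the FTIR band assignments described above for a set of observed peaks.
# The assignment table restates the text; the observed peaks are hypothetical.

BAND_ASSIGNMENTS = {
    3420: "O-H stretching (hydroxyl)",
    2920: "C-H stretching",
    2360: "C=C group",
    1567: "aromatic-ring C=C / C=O",
}

def assign_bands(observed_peaks, tolerance=30):
    """Match observed peak positions (cm-1) to the tabulated assignments."""
    matches = {}
    for peak in observed_peaks:
        for reference, group in BAND_ASSIGNMENTS.items():
            if abs(peak - reference) <= tolerance:
                matches[peak] = group
    return matches

print(assign_bands([3415, 2925, 1570]))
```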
Biochar had the potential to remediate AMD-contaminated paddy fields, with its functional groups playing a key role in the process. When prepared at lower pyrolysis temperatures, biochar contained a range of functional groups.
In synergy with microorganisms, these functional groups could facilitate the dissolution of As-containing iron oxides produced by AMD, and some of the residual As could also undergo transformation. The dissolved Fe and As in soil solution would be rapidly adsorbed by biochar. Biochar prepared at lower pyrolysis temperatures exhibited a greater capacity to adsorb them, with Fe forming secondary oxides on its surface. While the iron oxides desorbed with fluctuations in redox conditions, the adsorption of As steadily increased. The As/Fe molar ratio on biochar could exceed that of the soil. Before practical application, field-scale experiments are needed to further investigate whether the functional groups of biochar degrade and affect the stability of adsorbed Fe and As under repeated wet-dry cycles in paddy fields.
NTP Nonneoplastic Lesion Atlas: A New Tool for Toxicologic Pathology

Derived from Greek, the term "neoplasia" literally means "new plasma" and refers to new growth in tissue that does not serve a useful purpose, i.e., tumors. Neoplasms may be malignant or benign; some benign tumors may progress to malignancy. According to Boorman, coeditor of a classic text in the field, nonneoplastic lesions encompass a very broad assortment of tissue alterations including congenital, degenerative, inflammatory, adaptive, and reparative changes. Some nonneoplastic lesions occur normally with age; exposure to a test chemical may either increase or decrease the incidence and/or severity of these spontaneously occurring "background" lesions. In other cases, exposures may induce novel nonneoplastic changes. Some nonneoplastic lesions may progress to tumor formation with time or continued chemical exposure. Pathologists have traditionally diagnosed tissue lesions using print resources, such as textbooks and journals. With the Atlas, users are able to access and zoom in on hundreds (and eventually thousands) of high-resolution images online. Each lesion is accompanied by commentary and recommendations for accurate diagnosis, supported by peer-reviewed references. NTP pathologist Mark Cesta, who serves as the Atlas' primary editor, says NTP staff and their collaborators strove for consistency with established texts in the field, notably Pathology of the Fischer Rat: References and Atlas and Pathology of the Mouse: References and Atlas. Notably, the NTP Atlas includes slides derived from studies with the Sprague-Dawley rat, which is now more widely used in NTP research than its predecessor, the Fischer 344 rat. Veterinary pathologists rely increasingly on the Sprague-Dawley rat because Fischer 344 rats are at naturally high risk of mononuclear cell leukemia and testicular tumors, which can muddy study results. According to Boorman, these two strains have been used more than any other in product safety assessment and evaluation of environmental chemicals. The Atlas is also consistent with documents issued by the International Harmonization of Nomenclature and Diagnostic Criteria (INHAND), a global toxicologic pathology initiative to develop consensus terms for diagnosing lesions in rats and mice. Launched in 2006, INHAND publishes its recommendations in the peer-reviewed journals Toxicologic Pathology and Journal of Toxicologic Pathology, dedicating separate supplemental issues to specific organ systems. By contrast, the Atlas' content resides in a single online location. Whereas INHAND is geared toward scientists in the pharmaceutical industry and limits its content to diagnostic terminology, the Atlas was developed mainly as an internal resource for the NTP with an additional focus on strategies for consistency in diagnosing lesions. But with the NTP's far-reaching influence in toxicologic pathology, Boorman predicts the Atlas, which anyone may use for free, will find widespread use in training and research.
The fact that veterinary pathologists generally agree on neoplastic terms reflects a long focus on cancer in animal studies. Neoplasms are distinct, easily recognizable entities, unlike nonneoplastic lesions, which raise more diagnostic challenges. For instance, a chemical that induces inflammation as its primary effect might also induce secondary changes, such as metaplasia (an adaptive shift from one cell type to another) or necrosis (tissue death). But given that inflammation can induce necrosis and vice versa, it’s not always obvious which lesion came first. Moreover, inflammation can take different forms—for instance, acute, suppurative, or chronic—and there may be considerable overlap between these subclassifications. “The same inflammatory lesion might be subclassified differently depending on who reads the slide,” Cesta says. In yet another complicating factor, many nonneoplastic lesions occur normally with age but become more pronounced with chemical exposure. Thus, it can be difficult to discern chemical effects from a lesion’s natural background rate. One example, Boorman explains, is a kidney disease called chronic progressive nephropathy (CPN), which occurs to some degree in all rats as they age. CPN displays multiple features on histology, including an influx of white blood cells, an accumulation of connective tissue, and a buildup of calcium and phosphate in the kidney. Depending on the pathologist’s interpretation, each of these findings might be recorded separately, or they might be lumped together as CPN. Therefore, a search for “CPN” in the NTP database might not return a complete set of results for studies that report CPN characteristics. In a related scenario, Boorman says that aging female Fischer 344 rats normally accumulate high levels of basophilic foci in the liver, meaning small clusters of liver cells that stain a darker blue than the normal liver, indicating they might be precancerous. These age-related lesions don’t all turn into tumors. Yet, nitrosamines, which are used to manufacture certain products and also occur as by-products, produce basophilic foci that do often become malignant. “In treated animals, the foci precede cancer, but in control animals, they are very unlikely to progress to cancer,” Boorman says. “So that’s a case in which the same nonneoplastic term—‘basophilic foci’—could apply to very different scenarios.” He says that discrepancy could be a challenge for pathologists trying to compare the dose-related incidence of basophilic foci in chemically treated animals with background lesions in untreated controls. Many nonneoplastic diseases are associated with environmental exposures, and lesions that appear in human illness can also be observed in chemically treated animals. For example, when exposed by inhalation to a chemical called diacetyl (an ingredient in artificial butter flavoring), C57BL/6 mice develop bronchial lesions similar to a human illness known as bronchiolitis obliterans. This irreversible obstructive lung disease has been known to occur in workers with occupational exposure to artificial butter flavoring. Ideally, by improving how pathologists classify and interpret these and other nonneoplastic lesions, the Atlas will enhance our understanding of environmentally related diseases in humans. Meanwhile, Bucher adds that discrepancies in terminology create a huge workload for the NTP’s Pathology Working Group, which is responsible for resolving differences of opinion on diagnostic issues. 
Pathologists have to read reports and compare images that can be described differently depending on the source as they try to come up with an accurate diagnosis. The task can be onerous, “and this is a problem that’s just getting worse as we get farther into noncancer end points,” Bucher says. “We’re doing more and more work in reproductive, immunological, neurological, and developmental toxicology, and that means more time analyzing nonneoplastic lesions.” According to Robert Sills, chief of the NTP Cell and Molecular Pathology Branch, nonneoplastic lesions tend to occur soon after chemical exposures, unlike tumors, which can take much longer to appear. The Atlas will provide guidance for achieving greater consistency in both short- and long-term studies, Sills says.
To access the Atlas’ content, users navigate from a homepage organized around anatomical systems. At the time of this publication, 5 systems had been published online: the hematopoietic system (which makes blood), the hepatobiliary system (i.e., the liver and gallbladder), the integumentary system (i.e., skin), the nervous system, and the urinary system. Each system is subdivided into pages for relevant organs and tissues. The nervous system, for instance, has pages devoted to the brain, nerves, and spinal cord. In turn, the page for the brain has an illustrated anatomical discussion and 18 additional pages, each devoted to a specific brain lesion. The pages include multiple enlargeable images of the lesions along with the diagnostic recommendations. According to Cesta, the Atlas will eventually be expanded to 13 anatomical systems encompassing 56 different tissue types in all. “We think of it as a living document that will be continually updated,” he says. Since the Atlas is a living document, pathologists will have the flexibility to diagnose new lesions as they emerge in research, and the Atlas can incorporate new terms for novel findings. Much of the initial NTP research and pathology review is performed by contractors, who will be encouraged to use the consensus terms contained in the Atlas. “NTP pathologists review all the work that our contractors do for us,” Sills says. “We meet with our contractor partners one on one, and that gives us the opportunity to confirm that our recommendations for documenting nonneoplastic lesions are being followed.” Prior to its official launch, the Atlas underwent extensive review coordinated primarily by Cesta, Sills, NTP colleagues David Malarkey and Ronald Herbert, and Amy Brix of Experimental Pathology Laboratories, Inc., in Research Triangle Park, North Carolina. Additional review was conducted by outside experts. Among them was Rick Hailey, a veterinary pathologist with GlaxoSmithKline in Research Triangle Park. Hailey says there’s little difference between drug-induced lesions and those induced by environmental exposures, indicating that pharmaceutical scientists may find the Atlas useful. “That’s especially true of younger scientists,” he says, “who might gravitate naturally to a web-based application instead of the more traditional textbooks.” Hailey says he views the Atlas as a valuable supplement to INHAND, “primarily for folks that evaluate NTP studies and younger students in training.” He adds, “It’s an intuitive program and a valuable search tool. From the standpoint of functionality, it works well.”
Tonsil biopsy to detect chronic wasting disease in white-tailed deer (Odocoileus virginianus)

Since its initial identification several decades ago, the incidence of chronic wasting disease (CWD) in North American wild and farmed cervid populations has increased. Since animals can transmit infection months to years before developing clinical signs, strategies to limit transmission depend on detecting affected stock during early infection. Diagnosis of CWD was initially limited to postmortem examination of clinical animals, where histologic analysis of the central nervous system would reveal characteristic spongiform neurodegeneration in advanced cases. Antibody-based diagnostics were developed when the role of abnormally folded prion protein was recognized as central to prion diseases, accumulating in neural tissue before spongiform neurodegeneration and, in the case of cervids with CWD, even earlier in specific lymphoid tissues. Immunohistochemistry (IHC) provides a great deal of diagnostic certainty. A sample is identified as positive when immunostaining specific for disease forms of the prion protein (e.g., PrP CWD) can be visualized in expected locations such as lymphoid follicles. The official regulatory test of the United States Department of Agriculture (USDA) used for postmortem diagnosis of CWD is IHC of the medial retropharyngeal lymph nodes (MRPLNs) and the obex. Though the MRPLNs are a site of early PrP CWD accumulation in white-tailed deer (WTD) (Odocoileus virginianus), biopsy of the MRPLNs requires a surgical approach and is thus an impractical tissue source for routine antemortem diagnosis. Abnormal prion protein can also accumulate in the recto-anal mucosa-associated lymphoid tissue (RAMALT) in sheep, WTD, and elk, which is readily sampled through superficial mucosa biopsy in living animals. However, the diagnostic sensitivity of IHC using RAMALT samples can vary between 25% and 95%, depending on animal species and genetic variability within the prion protein gene (PRNP). Furthermore, IHC detection of PrP CWD in the RAMALT of WTD can vary from 12 to 27 months after infection. Accumulation of PrP CWD in the palatine tonsils is a relatively early event in mule deer (Odocoileus hemionus) and WTD, and some antemortem tonsil biopsy data have been published. From a retrospective study of WTD, PrP CWD was detected in tonsil biopsies by IHC as early as six months post-inoculation. In the present study, we report the IHC diagnostic sensitivity of a two-bite tonsil biopsy from 79 field cases in farmed WTD. All study deer were preclinical, and all samples, including tonsil biopsies, were collected postmortem. Also evaluated were the potential associations of infection stage, PRNP genotype at codon 96, and tonsil follicle metrics with detection of PrP CWD by tonsil biopsy IHC.
Sample collection

The study was carried out using tissues collected postmortem by employees of, and under the authority of, USDA-APHIS and Texas state regulatory agencies. These WTD herds were depopulated as an official regulatory action due to the presence of CWD in the herds. No animals were euthanized for the purpose of this study. All study samples were collected opportunistically postmortem. All deer were considered preclinical and appeared healthy at the time of depopulation. Antemortem biopsy of the tonsil had been performed in some of these animals by local regulatory agencies. Regulatory tissue samples (left and right MRPLNs, obex) were collected and submitted to the USDA National Veterinary Services Laboratories (NVSL) (Ames, IA) for official CWD IHC testing. After collecting the regulatory samples, a two-bite tonsil biopsy procedure was conducted as previously described, preserving the contralateral tonsil for unbiased metrics. In brief, the tongue was reflected and two biopsies were collected in situ from the left tonsil using a 6 mm ovarian biopsy instrument inserted into the left tonsillar crypt at a dorsolateral angle. When the tonsillar crypt was not large enough to insert the biopsy instrument, a bite of the overlying epithelium was first removed to expose the tonsil. The biopsies were placed into a tissue cassette with a sponge and put in 10% formalin. The biopsy technique mimicked the antemortem process as much as possible. To limit variation in the biopsy technique, all the samples were collected at diagnostic laboratories and a single operator (TAN) used the same procedure across depopulation groups. After tonsil biopsy, both whole tonsils were removed and placed in 10% formalin. The tonsil samples were held until the official CWD diagnostic reports were received from NVSL.

Immunohistochemistry

Immunohistochemistry (IHC) was conducted at NVSL using the standard operating procedures for detecting PrP CWD as previously described. Briefly, 5 μm tissue sections were mounted on positively charged glass slides (Fisher Scientific), oven dried, treated with formic acid, rinsed with Tris buffer (pH 7.5), and subjected to hydrated autoclaving using DIVA antigen retrieval solution (Biocare Medical) and a decloaking chamber (Biocare Medical). Immunostaining was carried out using an automated immunostainer and associated reagents (Ventana Medical Systems) as well as the Anti-Prion (99) Research Kit, RTU (Ventana Medical Systems). The main reagents of these kits included decloaker solution, antibody block, monoclonal antibody F99, alkaline phosphatase-conjugated anti-mouse IgG secondary antibody, fast red chromogen, and hematoxylin. Each automated run included tissue controls from CWD-infected and non-infected deer.

Data collection and statistical analyses

Age was either precisely known from records or only known at the birth year level. For quantitative purposes, ages were recoded into one-year age groups such that precise ages were rounded up if equal to or greater than one-half year. The genotypes of the prion protein gene (PRNP) at codon 96, coding for the amino acids glycine (G) and serine (S), were determined by a commercial service (GeneCheck). Genotypes at other codons were not determined. The stage of preclinical infection was classified by IHC of both MRPLNs and the obex, where 'early' stage deer had PrP CWD accumulation in MRPLN follicles but not the obex, and 'late' stage deer had accumulation at both tissue locations.
The PrP CWD -positive and total numbers of lymphoid follicles were counted in a thin section of the whole tonsil. Data were analyzed and graphed using the procedures available in SAS 9.4 (SAS Institute Inc.). Basic statistics and histogram plots were produced using the UNIVARIATE procedure. The FREQ procedure was used to calculate diagnostic sensitivities, exact 95% confidence limits (CLs), and measures of agreement (Cohen’s kappa coefficient, κ; McNemar’s Q test for 2x2 contingencies, Q M ; Cochran’s Q test for stratified contingencies, Q C ). Values of κ were categorized as one of six agreement levels : none = 0–0.20, minimum = 0.21–0.39, weak = 0.40–0.59, moderate = 0.60–0.79, strong = 0.80–0.90, almost perfect > 0.90. The LOGISTIC procedure was used to test the association of stage of infection with genotype, sex, and age group and included first and second-order effects. The GLIMMIX procedure was used to model the effects of genotype, sex, and age on the total follicle counts of the whole tonsil sample (distribution: negative binomial). The likelihood ratio ( Q LR ), 95% CLs, and fit plots were used to assess the significance of each regression model.
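The two paired-agreement statistics named above can be reproduced directly from a 2×2 table of whole-tonsil versus biopsy results. The sketch below is a standard-library implementation; the counts in the example are reconstructed from the overall results reported later (69 of 79 whole tonsils positive, 57 of 79 biopsies positive, and no biopsy-positive/whole-negative pairs), so the printed values should approximate the reported κ = 0.546 and exact P = 0.0005.

```python
from math import comb

def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
    """Kappa for a paired 2x2 table: a = +/+, b = +/-, c = -/+, d = -/- counts."""
    n = a + b + c + d
    observed = (a + d) / n
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (observed - expected) / (1 - expected)

def mcnemar_exact_p(b: int, c: int) -> float:
    """Two-sided exact McNemar p-value from the discordant counts b and c."""
    n, k = b + c, min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(1.0, 2 * tail)

# Whole tonsil (rows) vs tonsil biopsy (columns), counts reconstructed from the
# overall results: both positive, tonsil+/biopsy-, tonsil-/biopsy+, both negative.
a, b, c, d = 57, 12, 0, 10
print(round(cohens_kappa(a, b, c, d), 3), round(mcnemar_exact_p(b, c), 4))
```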
Seventy-nine (31 female and 48 male) WTD from nine herds were identified as infected with CWD by official testing at NVSL. The data collected from these animals are provided in . The dates of birth were known for 47 deer; the ages of 29 deer were recorded in whole years. Ages ranged from two deer less than 1 year of age to two deer aged 9.29 years (mean of original age data, 4.01 years). The median of ages grouped by year was 3 years. PRNP codon 96 genotypes included 56 GG, 17 GS, and one SS deer; genotype was not available for two female and three male deer. The age and sex distributions for genotypes GG and GS are shown in ; the single SS deer was a 6.33-year-old female. Accumulation of PrP CWD in MRPLN follicles was observed in all 79 deer. The stage of infection was classified as early preclinical in 42 deer (25 male, 17 female) and late preclinical in 37 deer (23 male, 14 female). The age distributions of deer in early and late preclinical stages of infection are shown for PRNP codon 96 genotypes GG and GS in ; the SS deer was in early preclinical infection. The probability of a deer being in an early stage of preclinical infection was not dependent on age group, PRNP genotype (where codon 96 was either GG or GS), sex, or any interaction of these factors (Q LR, P = 0.2037).

Diagnostic sensitivity of tonsil IHC using sections of whole and biopsy sample types

Upon official diagnosis, the paired postmortem samples of whole tonsil and two-bite tonsil biopsy were submitted to NVSL for evaluation by IHC. Postmortem tonsil biopsies from all deer had more than six lymphoid follicles present. Tonsil biopsies collected antemortem were considered inconclusive if accumulation of PrP CWD was not observed and fewer than six lymphoid follicles were present in thin sections. Accumulation of PrP CWD in antemortem tonsil biopsies was detected in 14 of 36 WTD in which antemortem sampling had been conducted. Of the 22 WTD in which PrP CWD was not detected antemortem, 12 were detected in a postmortem biopsy. Conversely, tonsil accumulation of PrP CWD in antemortem biopsies was not detected in any WTD in which accumulation was not detected in the postmortem biopsy. Hereafter, all results are for postmortem samples. Accumulation of PrP CWD was observed in a thin section of whole tonsil in 69 deer (diagnostic sensitivity = 87.3%) and in the tonsil biopsy samples of 57 deer (72.2%). Accumulation of PrP CWD in a tonsil biopsy was only observed when accumulation was also observed in the whole tonsil. These paired estimates of general diagnostic sensitivity (i.e., without consideration of other factors) were significantly different (Q M = 12.0, P exact = 0.0005). Furthermore, the agreement of diagnoses between sample types was categorized as weak (κ = 0.5460, 95% CLs: 0.3352, 0.7568) but was better than by chance alone (P exact < 0.0001). The following analyses compare diagnostic sensitivities and agreement as stratified by stage of preclinical infection and by genotype at PRNP codon 96. The agreement of results between sample types depended on the stage of preclinical infection (Q c = 20.7170; P < 0.0001). From deer in late preclinical infection, there were no discordant pairs of results (that is, there was perfect agreement between sample types), yielding a joint tonsil IHC diagnostic sensitivity of 91.9% (exact 95% CLs: 78.1%, 98.3%).
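The exact confidence limits quoted for these sensitivities are Clopper–Pearson intervals, which can be reproduced as below (using scipy). The example uses the late preclinical group, where the reported 91.9% corresponds to 34 of 37 deer positive by either tonsil sample type; the printed limits should be close to the quoted 78.1% and 98.3%.

```python
# Clopper-Pearson (exact) 95% confidence limits for a diagnostic sensitivity.
from scipy.stats import beta

def exact_ci(positive: int, n: int, alpha: float = 0.05):
    lower = beta.ppf(alpha / 2, positive, n - positive + 1) if positive > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, positive + 1, n - positive) if positive < n else 1.0
    return lower, upper

# Late preclinical deer: the 91.9% joint sensitivity implies 34 of 37 positive.
lo, hi = exact_ci(34, 37)
print(f"sensitivity = {34 / 37:.1%}, exact 95% CI = ({lo:.1%}, {hi:.1%})")
```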
For deer in early preclinical infection, there was minimum agreement between tonsil sample types (κ = 0.3898, 95% CLs: 0.1587, 0.6210), but the agreement was better than by chance alone (P exact < 0.0019). The early preclinical diagnostic sensitivity of whole tonsil IHC was 83.3% (exact 95% CLs: 68.6%, 93.0%) and for tonsil biopsy IHC was 54.8% (exact 95% CLs: 38.7%, 70.2%); these estimates were significantly different (Q M = 12.000, P exact = 0.0005). The agreement of results between sample types significantly depended on genotype (GG vs GS) stratified by stage of infection (Q c = 21.3860; P < 0.0001). For deer in late preclinical infection, there were no discordant pairs of results for either genotype, yielding a joint estimate of tonsil IHC diagnostic sensitivity in GG deer of 92.6% (exact 95% CLs: 75.7%, 99.1%) and in GS deer of 85.7% (42.1%, 99.6%); a significant difference between these joint sensitivities was not detected (Q C = 4.000, P = 0.1353). In contrast, during early preclinical infection there was minimum agreement of results between tonsil sample types when from GG deer (κ = 0.3596, 95% CLs: 0.0423, 0.6770) and when from GS deer (κ = 0.4444, 95% CLs: 0.0071, 0.8818). The agreement of results from early preclinical GG deer was significantly better than by chance alone (P exact = 0.0328), but agreement from early preclinical GS deer was not (P exact = 0.1667). For early preclinical GG deer, the tonsil IHC diagnostic sensitivity for whole samples was 89.7% (exact 95% CLs: 72.7%, 97.8%) but for tonsil biopsy samples was 65.5% (exact 95% CLs: 45.7%, 82.1%); these estimates were significantly different (Q M = 7.000, P exact = 0.0156). For early preclinical GS deer, a statistical difference between the tonsil IHC diagnostic sensitivity of whole samples (60.0%, 95% CLs: 26.2%, 87.8%) and tonsil biopsy (30.0%, 95% CLs: 6.7%, 65.3%) was not detected (Q M = 3.000, P exact = 0.2500).

Relationship of whole tonsil metrics with the probability of detecting PrP CWD in a tonsil biopsy

The proportion of PrP CWD-positive tonsil follicles was estimated by counting the total and positive numbers of follicles present in thin sections of the unbiopsied whole tonsil (N = 66 WTD; ). The total number of whole tonsil follicles counted was highly variable between deer (mean = 126.7, standard deviation = 47.5). The mean of whole tonsil follicle counts was marginally dependent on the animal's age (F = 4.83, P = 0.0316); the estimated reduction in mean total follicle count was 4.9 follicles per year. The probability of a false negative tonsil biopsy result in early preclinical deer was not significantly dependent on the whole tonsil total follicle count (Q LR = 0.3717, P = 0.5421). In contrast, the probability of a false negative tonsil biopsy result in early preclinical deer was significantly dependent on the whole tonsil estimate of the proportion of positive follicles (Q LR = 30.4393, P < 0.0001; ). The odds of a false negative result based on a two-bite tonsil biopsy from deer in early preclinical infection increased by a factor of 1.617 (95% CLs: 1.226, 2.603) for each 0.1 unit decrease in the positive proportion of whole tonsil follicles.
Early detection of CWD-infected cervids is key to mitigating the spread of disease. Of particular interest is the potential application of antemortem diagnostic testing to farmed WTD. This study determined the sensitivity of CWD IHC using a two-bite biopsy technique reported to produce optimal antemortem retrieval of tonsillar follicles from white-tailed deer under field conditions . In this study, diagnostically adequate numbers of follicles were obtained using this two-bite biopsy sampling technique in 79 preclinical, naturally infected farmed WTD from nine CWD-positive herds from across the United States. The study group included similar proportions of deer with early and late preclinical infections, and each infection stage was similarly represented by males and females, ages ranging from 6 months to 9 years, and the GG and GS genotypes of PRNP codon 96. The contralateral tonsil was collected intact to provide unbiased whole tonsil metrics to better understand factors that may affect the diagnostic sensitivity of tonsillar biopsy. The overall preclinical diagnostic sensitivity of CWD IHC using this unilateral two-bite tonsil biopsy technique was estimated to be 72% whereas the sensitivity from the paired whole tonsil was significantly higher at 87% . Animals with early preclinical infection—a stage defined in this study as official IHC detection of PrP CWD in MRPLN follicles but not the obex of WTD—are notoriously difficult to diagnose antemortem. Thus, even though the diagnostic sensitivity of tonsil biopsy for WTD in late preclinical infection was 92% and was the same as that achieved by examining the whole tonsil, the tonsil biopsy sensitivity for WTD with early preclinical infection was reduced to 55% despite an 83% sensitivity based on the whole tonsil. Furthermore, early preclinical detection was low at 30% in WTD bearing the GS genotype of PRNP codon 96 as compared to 66% detection in GG herd mates. The poor sensitivity of tonsil biopsy during early preclinical infection was strongly associated with the proportion of PrP CWD -positive tonsil follicles as estimated using follicle counts from the unbiopsied whole tonsil. As seen in , the detection of PrP CWD in at least 80% of tonsillar follicles (x-axis) was observed in 31 (or 94%, y-axis) of 33 late preclinical deer and in 14 (42%) of 33 early preclinical deer. Thus, it is not surprising that PrP CWD was detected by tonsil biopsy in all 45 of these deer. But false negative results from tonsil biopsies occurred when tonsil estimates fell below 80% positive follicles. In WTD with early preclinical infection and PrP CWD present in the whole tonsil sample (N = 33), PrP CWD was not detected by tonsil biopsy in two deer with respective estimates of 76% and 28% positive tonsil follicles, and in 9 of 12 (75%) deer in which the tonsil estimates were less than 20% positive follicles (range 4% to 19%). The odds of a false negative biopsy result during early preclinical infection increased by approximately 1.6 for every 10% decrease in the estimated proportion of positive tonsil follicles. As such, the chance of a false negative result from a unilateral two-bite tonsil biopsy was greater than 50% (probability 0.5) when the tonsils of WTD with early preclinical infection were estimated to have 20% or fewer positive follicles. 
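The dose–response between follicle positivity and biopsy failure can be sketched from the two quantities just cited: the 1.617-fold increase in odds per 0.1-unit drop in the positive proportion, and a roughly 50% false-negative probability at 20% positive follicles. The coefficients below are back-calculated from those figures and are only an approximation, not the fitted SAS model.

```python
import math

# Approximate logistic curve for the probability of a false-negative biopsy in
# early preclinical deer. The slope is back-calculated from the reported odds
# ratio (1.617 per 0.1 decrease in positive proportion); the intercept is chosen
# so that the probability is 0.5 at 20% positive follicles, as described above.
# This is an illustrative reconstruction, not the fitted model from the study.

slope = -math.log(1.617) / 0.1          # per unit change in positive proportion
intercept = -slope * 0.20               # forces p = 0.5 at a proportion of 0.20

def p_false_negative(prop_positive: float) -> float:
    logit = intercept + slope * prop_positive
    return 1 / (1 + math.exp(-logit))

for prop in (0.1, 0.2, 0.5, 0.8):
    print(f"{prop:.0%} positive follicles -> P(false negative) ~ {p_false_negative(prop):.2f}")
```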
Other sample types and novel detection methods have been studied in naturally infected WTD as potential antemortem tests for CWD and, in each case, diagnostic sensitivity was negatively impacted by the PRNP genotype at codon 96 and for deer during the early stage of infection (all using the same definition as in this study). Deer with GS and SS codon 96 polymorphisms are still susceptible to CWD. However, the amount of PrP CWD staining is significantly less in these animals as demonstrated in a controlled intranasal inoculation . From a meta-analysis of IHC-based diagnosis using RAMALT biopsy , the sensitivity was 68% overall but was only 42% in GS deer and only 36% for deer in early preclinical infection. Newer assay methods detect the misfolding activity associated with prions and have the potential to detect far lower amounts of PrP CWD than are routinely detected by immunoassays, including IHC. In one application of the real-time quaking-induced conversion (RT-QuIC) assay , the sensitivity of this misfolding assay applied to RAMALT biopsies was 69% overall but only 39% in GS deer and only 25% for deer in early preclinical infection. When the protein misfolding cyclic amplification (PMCA) assay was optimized for use with cervid blood samples , the sensitivity was 79% overall but only 57% in GS deer and 53% for deer in early preclinical infection.
While this study demonstrates some potential for using CWD IHC and tonsil biopsy as an antemortem diagnostic in naturally infected farmed WTD, detection was limited during early preclinical infection and in deer bearing the GS genotype at PRNP codon 96. This is not surprising given these same factors have been observed to negatively impact the measured diagnostic sensitivity of other antemortem sample types (e.g., RAMALT and blood), even when tested using protein misfolding assays. Thus, evaluations of CWD IHC applied to a two-bite biopsy sample of the palatine tonsil must also consider the potential impact of these limitations on its intended application.
S1 Table Whole tonsil and tonsil biopsy results for seventy-nine preclinical white-tail deer naturally infected with chronic wasting disease. (XLSX)
|
Evidence–time dilemma in a pandemic with high mortality: Can outcome‐driven decision making on vaccines prevent deaths? | 8ff38529-8286-4a96-b9a0-7e273fde54c1 | 8653071 | Preventive Medicine[mh] | When the first vaccines were authorized for coronavirus disease 2019 (COVID‐19) in December 2020, its death toll exceeded 2,500,000 deaths globally. Basic science showed an unprecedented pace in its response to the virus with the synthesis of mRNA‐1273 (Spikevax), the active substance of a COVID‐19 vaccine, on January 13, 3 weeks prior to the first confirmed death in the United States. Can regulatory science accelerate access to vaccinations, prevent deaths, and overcome the evidence–time dilemma in future pandemics? The death toll of the COVID‐19 pandemic has only been exceeded by the Spanish Flu in 1918. Early in the first wave of the pandemic, a highly disproportionate distribution of COVID‐19 infections and deaths was observed between the age groups with a disproportionately high case fatality rate in the elderly subpopulation (<1% in <64‐year‐old, 8.0% in 70–79‐year‐old, and 14.8% in >80‐year‐old subjects). Already at the start of the pandemic, it was obvious that effective vaccines will be the ultimate tool to control the COVID‐19 pandemic and bring societies back to normality. So, science excelled with the severe acute respiratory syndrome‐coronavirus 2 (SARS‐CoV‐2) virus genome being sequenced on January 11, the active substance, mRNA‐1273, synthesized on January 13, 2020. By December 2020, with an unprecedented speed of less than a year, the mRNA‐1273 and the BNT162 (Comirnaty) vaccine were developed and granted an Emergency Use Authorization (EUA) in the United States. , In April 2020, the International Coalition of Medicines Regulatory Authorities (ICMRA) discussed aspects for COVID‐19 therapeutic developments, including clinical trials, real‐world evidence (RWE), and compassionate use. They expressed the need for robust evidence to establish safety and efficacy for the proposed medicines, leading to timely regulatory decisions and thus guiding clinicians in defining the best treatment options for COVID‐19 to serve the patients’ needs in the fastest fashion possible. In support of the EUA, the pivotal clinical evidence was generated in large randomized controlled trials (RCTs) in an ideal‐world setting, in the broad adult population, with prevention as the primary end point (starting in July 2020). Due to the limited availability of the first two authorized COVID‐19 vaccines, the United States and almost all other countries prioritized the elderly in the vaccination process. This decision was based on modeling approaches revealing that mortality is minimized in scenarios where the subpopulation with the highest risk of COVID‐19‐related deaths is vaccinated first, already established for influenza vaccinations. Unfortunately, in a pandemic with such high mortality, there is an evidence–time dilemma; during the clinical evidence generation, the death toll continues to rise in the real world. Knowing that the second wave is often bigger than the first and was expected to start in autumn 2020 and last until spring 2021 further emphasizes the limited time. Indeed, emergencies and crises often act as a magnifying glass for known shortcomings in the drug development and regulatory decision making related to the evidence–time dilemma. First, the ideal‐world (efficacy and prevention) versus real‐world (effectiveness and mortality) dilemma. Second, the evidence versus access dilemma. 
Third, the population versus subpopulation dilemma. Before the first COVID‐19 vaccines were granted an EUA, there was a very high unmet medical need, especially in the subpopulation with the highest burden, risk of hospitalization, and COVID‐19‐related deaths, namely the elderly. I, therefore, propose the consideration of early access to the most advanced vaccines at the time of enrollment of the pivotal RCTs via managed access programs (MAPs) for the target population, here, the elderly. Especially as the majority of subjects in pivotal RCTs, mainly young adults, are not the ones with the highest benefit. The implications of early access are discussed regarding evidence generation, benefit‐risk assessment, regulatory decision making, and, finally, prevention of deaths. Regulatory science permanently evolves and provides multiple solutions to overcome the evidence–time dilemma in general, including the abovementioned shortcomings. First, it offers new approaches to generate high‐quality, broader evidence in simultaneous efficacy and effectiveness trials. That way, additional RWE on effectiveness can be generated with mortality as an outcome measure in cluster‐randomized pragmatic vaccination trials in the elderly in long‐term care facilities. It thereby complements the evidence on efficacy that the classical vaccine development paradigm (CVDP) generates in the ideal‐world setting with prevention as the primary end point. Simultaneous efficacy and effectiveness trials before authorization can address the ideal‐world versus real‐world dilemma intrinsic to the classical regulatory framework. Second, studying the effectiveness of vaccines on mortality in its most affected subpopulation in long‐term care facilities, where, in this pandemic, 45% of the total COVID‐19‐related deaths were observed, offers the opportunity of high‐quality evidence on clinically relevant outcomes in a short time. Thus, it addresses the subpopulation versus general population and the access versus evidence dilemma. Interestingly, one innovative approach to overcome the evidence–time dilemma was implemented in the RCTs for COVID‐19 vaccines within the CVDP: the adaptive clinical trial design with preplanned interim analyses. However, in October 2020, additionally required safety information, for the benefit‐risk assessment (BRA) and the issuance of the EUA in the United States, prevented all but the last one of the preplanned interim analyses and reflect an asymmetric focus on risks compared with benefits. Positive results of earlier interim analyses regarding efficacy could enable early and rapid access to vaccines for the elderly via MAPs and could thus prevent deaths. A pandemic with such high mortality and dynamic time course, occurring in waves and evolving virus variants, likewise requires highly dynamic, transparent, and consistent decision making from all relevant stakeholders, ideally in real‐time based on the dynamically changing totality of evidence (ToE). At the time of authorization of COVID‐19 vaccines, clinical evidence was only available on a single short‐term benefit, prevention, and on the risks of acute and short‐term side effects. Mid‐ and long‐term benefits and risks (e.g., the vaccine’s effect on mortality or long COVID‐19), remained unknown. Assessing the benefits and risks of vaccines only on the broad population level does not consider the substantial difference in the mortality rate between elderly and young adults. 
The classical pivotal RCTs do not generate evidence regarding this effectiveness outcome. Thus, the BRA so far disregards the probable epidemiological and clinical dependency between preventing disease and reducing disease‐related deaths. It misses the opportunity to grant high‐risk subgroups the potential benefits of vaccines when no effective treatment alternatives are available. The ToE, showing clear dose‐effect relationships in early exploratory dose‐ranging studies resulting in immune responses for both mRNA vaccines, might justify early access for future pandemics. When the pivotal RCTs for the broad adult population, including the elderly, are authorized, regulatory agencies could authorize early access restricted to the elderly within a MAP, too. Restriction to the elderly, vaccine administration by physicians, and real‐time monitoring of acute safety issues in a similar way, as in pivotal RCTs, minimize the risks associated with early access. Consequently, these risks can be considered low for the elderly subpopulation as additional high‐level RWE is parallelly generated in pragmatic RCTs. Above all, considering the known high benefit‐risk ratio of vaccines in general, the most relevant benefit, the prevention of deaths, can be expected to outweigh the risks. The implementation of the proposed new approach during the COVID‐19 would have been associated with a degree of uncertainty regarding potential safety and quality issues considering the new mRNA technology and the already very short development time of the first mRNA COVID‐19 vaccines. On the one hand, those safety or quality issues, if occurring in the managed access program, could result in negative downstream consequences on the willingness of patients/caregivers to be vaccinated with the vaccines once they are authorized. On the other hand, earlier, ensuring evidence on additional effectiveness outcomes, such as hospitalization and mortality, generated in the MAP could even facilitate and accelerate vaccination campaigns. Finally, transparent and consistent communication by regulators and policymakers of key benefits and risks (known, unknown, and expected) and respective uncertainties of vaccines at key milestones during the clinical development, ideally supported by valid quantitative BRA methods, will be critical to enable appropriate and responsible informed consent procedures and shared decision making between patients and physicians, and an informed public. Currently, evidence emerges in support of the proposed early access to the high‐risk subpopulation. An observational study within a nationwide vaccination setting in Israel demonstrates effectiveness when preventing COVID‐19‐related deaths in 72% of the subjects aged greater than or equal to 70 years. In the largest pandemic of the last 100 years, the first pandemic in the 21st century, early access to effective vaccines with the potential to prevent infection, burden, and death, could have been considered. Regulatory decision making based on the ToE, accepting moderate levels of uncertainty where the risks can be managed and giving high‐risk patients access to the potential benefits, can save lives in a pandemic. To maximize the future impact of regulatory science regarding the development of new treatments and their regulatory review, it is imperative to take appropriate actions on two levels. First, solutions adapted successfully during the COVID‐19 crisis should continue to be applied thereafter. 
Second, lessons learned, and insights gained during the crisis need to be transformed into future solutions. The unprecedented speed of the development and approval of the first COVID‐19 vaccines can mainly be attributed to four factors: rapid development of active substances for efficacious vaccines, operationally fast RCTs required by regulators combined with streamlined/reduced nonclinical requirements prior to first‐in‐human trials, the rolling review by regulatory agencies, and the spending of governments and nongovernmental organizations on manufacturing of COVID‐19 vaccines at risk. However, options from the current regulatory science toolbox (e.g., early/managed access programs, simultaneous efficacy, and effectiveness trials), and quantitative methods for differentiated BRA and model‐informed regulatory decision making, carry additional potential. Further, adaptive designs with interim analyses without additional safety requirements allow for compassionate use of early access to vaccines and their potential benefits. The proposed early access to vaccines for the high‐risk subpopulation based on the ToE contributes to a faster translation of basic science into life‐saving vaccines. It demonstrates how well‐known dilemmas in the classical clinical drug development and regulatory decision making framework can be addressed in the future, in the interest of public health, and, in particular, high‐risk subpopulations. The ToE approach is consistent with Eichler et al., explaining that the future is not about RCTs versus RWE but RCTs and RWE—not just assessing efficacy and safety but also effectiveness. , Approving vaccines using a platform approach based on the available prior evidence on the mRNA vaccine technology should be able to permit even earlier access to effective and safe vaccines. This carries the potential to prevent future pandemics at the stage of local outbreaks with new viruses. KE developed the design and collected, analysed, and interpreted the data for this article. KE also drafted, wrote, critically revised, and approved the article and agrees to be accountable for all aspects of his work. |
Identifying Water-Salt Homeostasis and Inflammatory Response in Pathological Cardiac Surgery-Associated Acute Kidney Injury: NT-proBNP-related lncRNAs and miRNAs as Novel Diagnostic Biomarkers and Therapeutic Targets | 7f8870fe-5238-42dc-b93c-8879c611d6b7 | 11843142 | Surgical Procedures, Operative[mh] | Acute kidney injury following cardiac surgery (CS-AKI) is a prevalent and serious issue, affecting about 40% of patients and resulting in significant mortality , . Recent research has pointed out that disruptions in water and salt balance, along with inflammation, play a role in AKI - . Numerous clinical studies have identified a link between pre-surgery NT-proBNP and BNP levels and the occurrence of CS-AKI, particularly in severe cases, with NT-proBNP significantly improving CS-AKI prediction - . Currently, our comprehension of the relevant molecular mechanisms between CS-AKI and NT-proBNP is rather restricted. Non-coding RNAs (ncRNAs), prevalent in the human genome, are crucial for gene regulation and hold potential as biomarkers for diagnosing diseases like AKI . Research indicates that the improper regulation of ncRNAs is linked to the pathological development of CS-AKI - . Within the group of ncRNAs, lncRNAs and miRNAs are crucial components that need to be studied in relation to CS-AKI , . Exploring the link between NT-proBNP and CS-AKI through lncRNAs and miRNAs is crucial, as it could lead to more effective treatments for CS-AKI. Our past findings showed that the NT-proBNP prior to operation, when elevated, had a connection with a boosted possibility of CS-AKI. Therefore, the current research concentrated on exploring the ncRNA alterations in expression among participants diagnosed with CS-AKI and probing into potential regulation modes via RNA sequencing. 2.1 Patients and ethics approval This cohort study included 30 participants who underwent heart surgery at Fuwai Hospital in Beijing, China. Individuals with mental disorders, significant liver and kidney issues, a history of major surgeries excluding cardiac surgery, or those who declined to join the trial were not included. The study procedure was supervised and approved by Fuwai Hospital's Institutional Review Board (IRB), which waived the requirement for written informed consent due to the retrospective nature of the research. The study was carried out in full compliance with the applicable regulations and guidelines. 2.2 Gathering samples and data origins Healthcare professionals documented the patients' fundamental and clinical information, encompassing demographic details, biochemical markers, and data from before, during, and after surgery. Samples of serum and urine were gathered at different times during the after-surgery phase (0, 12, 16, 24 hours) and before surgery , . As mentioned in our earlier article , the hospital laboratory regularly measured serum NT-proBNP levels before and after surgery. Additionally, we periodically monitor plasma levels of biochemical markers. 2.3 Group division Patients were categorized into two groups according to NT-proBNP values: patients whose post-surgical to pre-surgical specific value was equal to or greater than 2 were categorized into the BNP-high group. On the other hand, patients whose specific value was less than 2 were assigned to the BNP-stable group. Besides, among the 30 participants, those who were diagnosed with AKI were categorized as AKI, while the rest were classified as non-AKI. 
The patients had their CS-AKI diagnosed as per the diagnostic criteria defined in the Kidney Disease Improving Global Outcomes (KDIGO) guidelines . 2.4 RNAseq TRIzol (Thermo Fisher Scientific, USA) method was applied to isolate RNA. The NanoDrop ND-1000 (Nano Drop, Wilmington, DE, USA) was used to quantify the RNA concentration and quality of each sample. The RNA integrity was analyzed by the Agilent 2100 Bioanalyzer (Agilent Technologies, USA). The Collibri Stranded RNA Library Prep Kit (Thermo Fisher Scientific, USA) was used for mRNA library preparation. Afterward, PCR was employed to enrich DNA fragments followed by library purification and validation. RNA sequencing was performed on the Illumina NOVA 6000 platform - . The NEBNext® Multiplex Small RNA Library Prep Set for Illumina (NEB, USA) was used to establish the small RNA library. In this procedure, 3' adapters tailored for microRNAs and other small RNAs were ligated to RNA molecule ends, followed by the addition of 5' adapters. Single-strand cDNAs were amplified using RT-PCR and subsequently purified through gel electrophoresis. The quality of the cDNA construct was confirmed using the Agilent 2100 Bioanalyzer. Using the cBot (Illumina, USA), cluster generation was completed. Finally, the small RNA library underwent sequencing on the same sequencing platform as the mRNA library - . 2.5 Bioinformatics analysis High-quality data were acquired by filtering the raw next-generation sequencing reads using Seqtk ( https://github.com/lh3/seqtk ). Under the guidance of the Ensembl GTF gene annotation file, the Cuffdiff software was utilized to obtain the FPKM (Fragments per kilobase of exon per million fragments mapped) values of mRNA at the gene level and small RNAs (including miRNAs and lncRNAs). These FPKM values served as the expression profiles of mRNA and small RNAs. Subsequently, the fold change and P-value between the two groups of samples were calculated to screen for differentially expressed mRNAs, miRNAs, and lncRNAs. For mRNAs, GO (Gene Ontology) and KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway analyses were conducted directly. Meanwhile, the target genes of miRNAs and lncRNAs were predicted, and then GO and KEGG pathway analyses were performed on these target genes as well. Distinct lncRNAs and miRNAs were identified in both the AKI and BNP groups, and their overlaps were illustrated using Venn diagrams - . 2.6 Statistics Variables following a normal distribution were analyzed using a Student's t-test, with results presented as mean ± standard deviation (SD). Normality tests were performed on continuous variables for data analysis. The Mann-Whitney U test was used to analyze non-parametric data not following a normal distribution, with results presented as medians and interquartile ranges (IQRs). Categorical variables were analyzed using either Fisher's exact test or the χ² test, with results presented as numbers . For the continuous variables, Pearson correlation analysis was applied when the data exhibited a normal distribution; otherwise, Spearman correlation analysis was utilized - . Statistical significance was set at a P-value ≤ 0.05.
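As a reading aid for section 2.6, the helper below sketches the described test-selection logic with SciPy. The use of the Shapiro-Wilk test for the normality check is an assumption, since the paper does not name the specific normality test applied, and the variable names are hypothetical.

```python
from scipy import stats

ALPHA = 0.05

def is_normal(values):
    """Normality check; Shapiro-Wilk is assumed here (the specific test is not named in the paper)."""
    return stats.shapiro(values).pvalue > ALPHA

def compare_groups(group_a, group_b):
    """Student's t-test for normally distributed data, Mann-Whitney U otherwise (section 2.6)."""
    if is_normal(group_a) and is_normal(group_b):
        return "Student's t-test", stats.ttest_ind(group_a, group_b).pvalue
    return "Mann-Whitney U", stats.mannwhitneyu(group_a, group_b).pvalue

def correlate(x, y):
    """Pearson correlation for normally distributed pairs, Spearman otherwise (section 2.6)."""
    if is_normal(x) and is_normal(y):
        r, p = stats.pearsonr(x, y)
        return "Pearson", r, p
    r, p = stats.spearmanr(x, y)
    return "Spearman", r, p

# Example usage (hypothetical arrays): test_name, p = compare_groups(bnp_high_values, bnp_stable_values)
```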
3.1 Population characteristics Table shows no remarkable differences in baseline data, details within the operation, and AKI incidence after the operation between BNP groups. Pre-surgery NT-proBNP levels of the BNP-high group were dominantly decreased compared to the BNP-stable group ( P <0.001), with biochemical indicators showing no noticeable distinctions, as shown in Table . As shown in , the NT-proBNP fold change was 9.69 (8.11-12.81) for the BNP-high group and 1.19 (0.91-1.47) for the BNP-stable group. Still, there were no significant discrepancies in other heart function markers (Figure C). Figure compares the pre- and post-surgery biochemical index ratios in these 2 BNP groups. In contrast to the group with stable BNP levels, the group with high BNP levels demonstrated substantially elevated SG proportion before and after surgery (Figure A). Renal function did not show any statistically significant changes across the groups (Figure B). The comparison of inflammatory factors between groups revealed that the ratios of TNF and IL10 at 24 hours after surgery and pre-surgical were significantly higher in the BNP-high group compared to the BNP-stable group ( , Figure D). Spearman correlation analysis revealed no significant correlation between BNP multiple and water-salt metabolism and inflammation indexes (Table ). TNFα T5/T1 demonstrated a prominent medium positive association with inflammatory-promoting elements IL-6, CRP, and IL-8, as well as with the anti-phlogistic element IL-10 (r=0.574, P=0.001). 3.2 DELs across various groups shows QC results of lncRNA sequencing and miRNA sequencing. Figures A and B present lncRNAs with differential expressions (DELs) between 2 BNP groups. Among the 138 DELs found, 108 exhibited up-regulation and 30 exhibited down-regulation. The predicted differential lncRNA target genes underwent GO term and KEGG pathway enrichment analysis, with results displayed as scatter plots (Figures C and D). According to the top 30 enriched GO terms, the target genes were chiefly engaged in regulating dendritic spine morphogenesis (Figure C). The top 30 KEGG pathways showed significant enrichment in pathways associated with renin secretion and actin cytoskeleton regulation (Figure D). The include the differential lncRNAs identified for the AKI group and the GO and KEGG analysis results of the target genes. 3.3 Differential expression of miRNAs across various groups We collected 20 MB of raw small RNA sequencing data per sample (QC results in ).
The heatmap and volcano plot show differentially expressed miRNAs (DEMs) identified by edgeR in the BNP groups (Figures A and B), revealing 62 up-regulated and 43 down-regulated miRNAs. Forecasted genes targeted by DEMs underwent GO and KEGG analysis, of which the results are displayed in Figures C and D. GO analysis indicates target genes are mainly involved in macromolecule and RNA biosynthesis/modification and are located in nuclear components. KEGG analysis shows significant clustering in metabolic pathways and glycosphingolipid biosynthesis-ganglion series. Similar analyses were conducted for the AKI group. 3.4 Common DEMs and DELs in different groups Seven miRNAs as well as seven lncRNAs were screened out by Venn diagrams. Among them were four novel lncRNAs, namely MSTRG.129696.18, MSTRG.39610.1, MSTRG.129293.3, and MSTRG.129696.10. These are graphically depicted in Figure to afford additional elucidation and serve as a referential resource. Table displays their regulation status. The 7 lncRNA target genes screened out showed no association with pathways associated with AKI or BNP rise. Among the 3 already identified lncRNAs that are transcribed from exons, high expression levels in the kidney and heart, however, are shown only by NON-HSAT160556.1. BNP and AKI pathways are closely involved with 4 miRNAs, namely hsa-miR-206, hsa-miR-138-5p, hsa-miR-135a-5p, and hsa-miR-143-3p (Figure ).
The pre-surgical serum levels of NT-proBNP are broadly acknowledged as predictors for CS-AKI , . Nevertheless, the precise pathophysiological mechanisms and molecular regulatory pathways underlying CS-AKI remain inadequately elucidated. It is postulated that disruptions in the water and salt homeostasis related to BNP, in conjunction with inflammatory responses, are intimately correlated and play a crucial role in both the onset and the advancement of CS-AKI - . In both BNP groups, our study made it clear that there were notable differences in urinary specific gravity quotient between the after-surgery and pre-surgical periods, thus highlighting variations in the metabolism of body fluids and salts. No significant differences were found in serum potassium and sodium levels. As BNP reduces sodium reabsorption in the renal inner medullary collecting duct without affecting water reabsorption , we suggest that the group with high BNP levels showed higher urinary specific gravity due to reduced reuptake of sodium. As a crucial effector organ, the kidney plays an essential role in keeping the homeostasis of the metabolism of body fluids and salts. Biotic elements influencing this internal balance can significantly impact renal function, with inflammatory mediators being particularly notable. Studies on marathon runners suggest that AKI is linked to sodium and water loss through sweat and increased serum copeptin levels , . Marathon runners undergo sustained physical exertion, which induces an acute inflammatory response marked by elevated cytokine levels, including TNF-α, IL-6, and IL-8 , . Disruptions in cellular water and salt homeostasis can elevate tonicity beyond tolerable thresholds, thereby exacerbating inflammatory responses and inducing cellular apoptosis .
BNP plays a vital and coordinating role between cardiac and renal functions, particularly with regard to the modulation of inflammatory responses, sodium excretion, and the maintenance of water balance , . It is widely acknowledged that inflammation has an intricate link with BNP and CS-AKI , - . A growing body of evidence suggests that elevated plasma concentrations of TNF-α, IL-6, IL-8, and IL-10 are significantly involved in the pathophysiology of CS-AKI. Consistent with the current literature, it is indicated by our findings that the ratios of IL-10 and TNF-α are significantly increased 24 hours following surgery in contrast to the pre-surgery in the group of high BNP levels. RNA sequencing of both 24-hour pre-surgical and post-surgical plasma samples was executed to investigate the potential pathways associated with CS-AKI. It is indicated by the results that in reply to NT-proBNP, the progression of CS-AKI might involve various inflammatory mediators, which demands more research for clarification of the interactions and underlying pathways. Through GO and KEGG pathway enrichment analyses, we have obtained important findings in CS-AKI patients characterized by increased BNP levels. We noticed that there were rather significant differences in the expressions of 7 microRNAs and 7 lncRNAs before and after the operation. Specifically, 4 of these miRNAs have demonstrated crucial roles. On the one hand, they are involved in regulating the homeostasis of body fluids and salts. For example, miR-143-3p, miR-206, and miR-138-5p are predicted to be involved in the inhibition of water reabsorption (corresponding pathway: hsa04962). Also, they are predicted to participate in sodium reabsorption (corresponding pathway: hsa04960), aldosterone synthesis and secretion (corresponding pathway: hsa04925), and vasodilation (corresponding pathway: hsa04270). On the other hand, they regulate the secretion of inflammatory factors, and these inflammatory factors play a significant role in regulating tissue damage. In addition, miR-135a-5p is associated with vasodilation (hsa04270). Particularly, it is the reduction of sodium intake that can lead to the upregulation of the level of miR-143-3p among patients with untreated hypertension . Moreover, it has been confirmed that miR-206 can regulate the homeostasis of Na⁺ by targeting NCX2 . What these results suggest is that these miRNAs might genuinely be involved in the modulation of water and sodium metabolism, which means deeper exploration is needed. This research indicates that miR-143-3p and miR-138-5p may influence inflammatory responses by regulating mediators linked to TRP channels and activating calcium signaling pathways and calcium influx. These findings are consistent with earlier studies showing elevated plasma levels of miR-143-3p in AKI caused by drugs and the dual role of miR-138-5p in modulating inflammatory responses in different diseases - . Variability in miR-138-5p expression and function across pathological contexts may result from individual patient differences and sample selection criteria - . Recent studies have identified miR-135a-5p as being downregulated in patients with atherosclerosis, where it has been implicated in promoting inflammatory responses and oxidative stress - . Meanwhile, miR-135a-5p has also been found to be decreased in smooth muscle cells of the human aorta, and in this case, it can alleviate vascular inflammation in rats with chronic kidney disease - . 
In this research, it was determined that miR-135a-5p exhibited a remarkable downregulation in both the AKI and BNP groupings and was linked to the process of migration of leukocytes across the endothelium (hsa04670). miR-206 contributes to inflammatory responses and increases the reactive oxygen species (ROS) of mice via targeting as well as inhibiting SOD1 - . Overexpressed miR-206 increases inflammatory-stimulating immunomodulators IL-1β, IL-6, and CCL5 - . In the present research, a connection was uncovered between miR-206 and calcium influx along with the activation process of the calcium signaling pathway (hsa04020) . Additionally, miR-143-3p, miR-206, miR-135a-5p, and miR-138-5p likely modulate inflammatory cytokine secretion and tissue damage, supporting their documented roles in existing literature - . The observations demonstrate the four miRNAs screened out might influence the inflammatory reaction in the kidney and the metabolism of body fluids and salts, indicating their potential intermediary role between NT-proBNP and CS-AKI. As an inflammation marker for the kidneys and myocardium, lactate dehydrogenase (LDH), mainly detected in the myocardium and kidneys - , shows increased serum activity due to cell lysis or membrane disruption - . Studies show that elevated serum LDH levels in patients with AKI or CS-AKI suggest its potential as a predictive biomarker for CS-AKI , . Our study found no significant differences in nephric function or AKI incidence changes amid pre-surgical and after-surgery groups with stable or increased BNP levels. The study indicates that existing cardiac circulatory arrest procedures are generally safe, and the surgery-induced rise in NT-proBNP levels does not worsen renal function impairment. In larger cohorts, patients with pre-surgical elevated NT-proBNP levels show a higher incidence of AKI after cardiac circulatory arrest surgery, seemingly independent of operation. Pre-surgical low cardiac function or pre-existing renal impairment might account for this. MicroRNAs, including miR-138-5p, miR-143-3p, miR-135a-5p, and miR-206, impact the regulation of the inflammatory response to tissue injury and sodium and water metabolism. These microRNAs may impact NT-proBNP metabolism and AKI through inflammatory response factors, potentially regulating AKI in the BNP-high group. Future research shall clarify heart-kidney injury mechanisms and discover prospective treatment and prophylaxis loci. Generally, this study highlights the impact of DELs on CS-AKI. There is an intimate correlation between DEMs and inflammatory response and water-salt stability, offering significant insights into the molecular mechanisms of CS-AKI. These findings establish a foundation for the exploration of novel molecular markers indicative of early renal dysfunction, thereby facilitating the development of innovative CS-AKI treatment. Supplementary figures and tables. |
Co-Designing Communication: A Design Thinking Approach Applied to Radon Health Communication | 1867b130-2254-4728-abef-226623c5e5e1 | 10048842 | Health Communication[mh] | Health intervention planning models emphasize the importance of participatory methods, thus involving community members and other relevant stakeholders in the different planning stages, from problem definition to intervention implementation . Not only does this increase the external validity of the intervention by the acceptance and acknowledgment of the input provided by the community, but it also provides broad perspectives and skills from community members, stakeholders, and the design team. Using the collective creativity of professionals and the local community in designing an intervention is referred to as co-design and can be seen as a citizen science approach . Although multiple citizen science projects were conducted within the field of radon, co-design methods have, to our knowledge, not yet been adopted in intervention design . Radon is an indoor air pollutant. It is a natural radioactive gas that is present in the soil in varying concentrations depending on the composition of the ground. Radon is invisible and has no scent, there are no visible casualties due to the gas, and since it is a natural gas, there is no culprit to blame . In high-risk areas, radon can enter houses through cracks or different installation tubes in the foundations of buildings, and the gas can accumulate indoors. Radon concentrations are one of the leading causes of lung cancer . Despite current health interventions, research shows that testing and mitigation rates remain insufficient . This raises the question of whether the current interventions tackle the right barriers and provide the right facilitators. Research specifically focused on (mass) communication interventions regarding radon has observed multiple gaps in the communication strategies adopted in the past. For instance, statistical information in leaflets or news articles prevails . To address these gaps, an exploratory co-design study was developed to first focus on general barriers and facilitators to perform radon protective behaviors and second on the ideation and designing of communication interventions, together with people with personal experience with radon. In this way, community members co-design a communication intervention, making it more personally relevant and likely more effective . 2.1. Health Interventions to Address Radon Exposure Changing behavior requires change on different levels; the behavior change wheel identifies capability, opportunity, and motivation as the main sources of behavior. Motivation reflects the individual, opportunity reflects the individual’s environment, and capability reflects a combination of the two. For behavior change to be effective and durable, the three components should be addressed with different types of interventions that often stem from the policy level . Looking at the policy level regarding radon, Europe adapted the Basic Safety Standards in 2013 and included radon protection as well . In practice, all European Member States are legally required to develop and implement a radon action plan containing information on ways to decrease radon levels at homes and workplaces. In the United States, the Indoor Radon Abatement Act (IRAA) from 1988 requires that indoor radon levels be as low as outdoors . 
These legislations, however, are on the highest level (namely the European level and the National level of the United States). The responsibility lies with the countries/states and their interpretation of their responsibility and legislation. Some countries/states, for instance, Estonia, only inform people about radon and place the responsibility for behavioral actions on the individual , whereas other countries, for instance, Ireland and Belgium, take the initial steps to include more specific legislation . Multiple scholars state that legislation procedures in terms of housing code requirements (comparable to energy efficiency) might increase the uptake for radon testing and mitigating , as is the case in certain States in The United States, for instance, Pennsylvania . On a European level, Austria is considering similar measures . Other policy measures are mostly concerned with reducing the economic impact of the testing and mitigating procedure—for instance, incentivizing mitigations, offering subventions, or providing free tests . A city in Ireland experimented with providing digital radon monitors in the library to facilitate the need for these monitors without the costs of buying them . Other countries, such as Bulgaria and the Czech Republic, provide free tests, and yet other countries (e.g., Belgium) sell tests at lowered prices during the heating season. Subventions for mitigation are also country-dependent; for instance, Austria, Germany, and Sweden provide financial support to those carrying out mitigation works . No real evidence is available on whether the financial aspect matters to people. Interestingly, focus groups in Ireland show that people who performed mitigation perceived the costs as not too high as it was an investment in their health. At the same time, people who did not mitigate (but had high levels of radon) perceived the costs as too high and an important barrier . Despite the interventions and measures in place, the uptake of radon protective behavior remains insufficient . It remains unclear whether the interventions in place address the barriers people experience and whether they create the right facilitating conditions. Therefore, there is a need to explore in more depth what barriers and facilitators people experience regarding radon-protective behavior. As radon is a multi-level problem, not only do the situational and the environmental factors matter, the responsibility of actually performing testing and mitigating often still lies with the individual homeowners . So, while creating the right environment for them to act is needed, they still must be motivated to act. One way to increase motivation is through communication and persuasion. Communication occurs on different levels, including interpersonal communication (e.g., an individual talking about radon with their general practitioner), stakeholder communication (e.g., general practitioners that are informed about radon on a higher level), and mass media communication (e.g., press articles about radon). A recent systematic review that focused on mass media communication about radon shows that campaigns mostly aim to increase awareness, knowledge, risk perception, and perceived susceptibility using factual communication in the form of brochures or press articles. The focus is on providing people with information about the characteristics of radon and the (technical) solutions. 
Although informative leaflets can be effective, they assume the full rationality of the audience, where they act upon the information they receive. The literature on behavior change has shown that people often experience bounded rationality and that other aspects, such as relevance, biases, and emotions, play an important part in the process . Other messages such as fear appeals in videos showed increased intention to request more information , and direct phone calls and letters increased intention to test . Moreover, while these communication interventions have shown to be effective to some level (e.g., low degree of increase in testing behavior), the next step, namely mitigation, remains mainly unchanged , which identifies an additional gap. In particular, Hevey identified 17 steps of behavior, from becoming informed about radon to having confirmed mitigation . However, communication interventions rarely move along these steps. The precaution adoption process model is a theory based on the different stages of behavior, from being unaware of the problem to maintaining the problem. The theory emphasizes that different stages require different communication approaches. For instance, to move from the first stage (unaware) to the second stage (unengaged), media messages about the hazards are needed, while in progressing from the second stage to the third (undecided), testimonials and personal experiences are most effective. Further, to proceed from the third stage to the fourth (decided not to act) or to the fifth (decided to act), information about personal susceptibility, likelihood, and severity of radon exposure is effective. Detailed information about ways to perform the behavior, the costs, and the resources are mainly effective when moving from the fifth stage to the final stage (maintenance) . Overall, the systematic review showed a need for more personally relevant communication efforts, as the question remains whether and to what extent the current communication approaches tackle the right determinants at the right moment and are in line with the needs of the public . This unveils the need to inquire about the their preferences of the target group regarding radon-related communication. 2.2. Co-Design in Health Interventions on Radon To answer these questions, we need to engage in dialogue with the target group themselves and, even more so, involve them actively in developing communication tools. Participatory designs include various methods; however, the mean denominator is the active engagement of the public. Different levels exist within participatory designs, from providing information (one-way) to a discussion (two-way) and active participation (multiple ways), which is the highest level of involvement. The latter often results in participatory decision-making and co-design of new products, technologies, or health interventions . Within the existing research about the health issues related to radon, participatory designs or citizen science projects have been adopted previously . The main topic investigated in previous studies was the understanding of the lack of mitigating behavior, either through interviews (i.e., providing the information) or through discussing the topic in focus groups (i.e., discussion) . Citizen science projects were related to, for instance, raising awareness, radon mapping, or radon testing and mitigating . To our knowledge, ours is the first study applying active participation in the design process of a communication intervention in the context of radon. 
More specifically, our study was designed to involve residents and homeowners in understanding the lack of radon protective behaviors and related general barriers and facilitators and considering solutions regarding communication campaigns. To investigate these aspects, we opted for design thinking. This participatory design framework allows for opening up the problem and inviting people to think along to identify it and create solutions based on their first-hand experiences . It is a way of creative problem-solving that is human-centered and emphasizes observation, collaboration, and visualization of ideas. It emphasizes empathizing with the issue and the context of the issue, defining the exact problem and challenge, ideating ways to solve the challenge, and testing prototypes to do so . This method, both problem- and solution-oriented, can provide new insights into why people avoid radon protective behaviors, what they think the solution would be, and even what the solution should look like. To summarize, two questions are raised: first, what are the main barriers and facilitators to engaging in radon-protective behavior experienced by homeowners, and how are these addressed in current interventions, if at all? Second, how can the communication about radon be improved to be more relevant and engaging for the target group?
Other countries, such as Bulgaria and the Czech Republic, provide free tests, and yet other countries (e.g., Belgium) sell tests at lowered prices during the heating season. Subventions for mitigation are also country-dependent; for instance, Austria, Germany, and Sweden provide financial support to those carrying out mitigation works . No real evidence is available on whether the financial aspect matters to people. Interestingly, focus groups in Ireland show that people who performed mitigation perceived the costs as not too high as it was an investment in their health. At the same time, people who did not mitigate (but had high levels of radon) perceived the costs as too high and an important barrier . Despite the interventions and measures in place, the uptake of radon protective behavior remains insufficient . It remains unclear whether the interventions in place address the barriers people experience and whether they create the right facilitating conditions. Therefore, there is a need to explore in more depth what barriers and facilitators people experience regarding radon-protective behavior. As radon is a multi-level problem, not only do the situational and the environmental factors matter, the responsibility of actually performing testing and mitigating often still lies with the individual homeowners . So, while creating the right environment for them to act is needed, they still must be motivated to act. One way to increase motivation is through communication and persuasion. Communication occurs on different levels, including interpersonal communication (e.g., an individual talking about radon with their general practitioner), stakeholder communication (e.g., general practitioners that are informed about radon on a higher level), and mass media communication (e.g., press articles about radon). A recent systematic review that focused on mass media communication about radon shows that campaigns mostly aim to increase awareness, knowledge, risk perception, and perceived susceptibility using factual communication in the form of brochures or press articles. The focus is on providing people with information about the characteristics of radon and the (technical) solutions. Although informative leaflets can be effective, they assume the full rationality of the audience, where they act upon the information they receive. The literature on behavior change has shown that people often experience bounded rationality and that other aspects, such as relevance, biases, and emotions, play an important part in the process . Other messages such as fear appeals in videos showed increased intention to request more information , and direct phone calls and letters increased intention to test . Moreover, while these communication interventions have shown to be effective to some level (e.g., low degree of increase in testing behavior), the next step, namely mitigation, remains mainly unchanged , which identifies an additional gap. In particular, Hevey identified 17 steps of behavior, from becoming informed about radon to having confirmed mitigation . However, communication interventions rarely move along these steps. The precaution adoption process model is a theory based on the different stages of behavior, from being unaware of the problem to maintaining the problem. The theory emphasizes that different stages require different communication approaches. 
For instance, to move from the first stage (unaware) to the second stage (unengaged), media messages about the hazards are needed, while in progressing from the second stage to the third (undecided), testimonials and personal experiences are most effective. Further, to proceed from the third stage to the fourth (decided not to act) or to the fifth (decided to act), information about personal susceptibility, likelihood, and severity of radon exposure is effective. Detailed information about ways to perform the behavior, the costs, and the resources is mainly effective when moving from the fifth stage to the final stage (maintenance) . Overall, the systematic review showed a need for more personally relevant communication efforts, as the question remains whether and to what extent the current communication approaches tackle the right determinants at the right moment and are in line with the needs of the public . This unveils the need to inquire about the preferences of the target group regarding radon-related communication. To answer these questions, we need to engage in dialogue with the target group themselves and, even more so, involve them actively in developing communication tools. Participatory designs include various methods; however, the common denominator is the active engagement of the public. Different levels exist within participatory designs, from providing information (one-way) to a discussion (two-way) and active participation (multiple ways), which is the highest level of involvement. The latter often results in participatory decision-making and co-design of new products, technologies, or health interventions . Within the existing research about the health issues related to radon, participatory designs or citizen science projects have been adopted previously . The main topic investigated in previous studies was the understanding of the lack of mitigating behavior, either through interviews (i.e., providing the information) or through discussing the topic in focus groups (i.e., discussion) . Citizen science projects were related to, for instance, raising awareness, radon mapping, or radon testing and mitigating . To our knowledge, ours is the first study applying active participation in the design process of a communication intervention in the context of radon. More specifically, our study was designed to involve residents and homeowners in understanding the lack of radon protective behaviors and related general barriers and facilitators and considering solutions regarding communication campaigns. To investigate these aspects, we opted for design thinking. This participatory design framework allows for opening up the problem and inviting people to think along to identify it and create solutions based on their first-hand experiences . It is a way of creative problem-solving that is human-centered and emphasizes observation, collaboration, and visualization of ideas. It emphasizes empathizing with the issue and the context of the issue, defining the exact problem and challenge, ideating ways to solve the challenge, and testing prototypes to do so . This method, both problem- and solution-oriented, can provide new insights into why people avoid radon protective behaviors, what they think the solution would be, and even what the solution should look like. To summarize, two questions are raised: first, what are the main barriers and facilitators to engaging in radon-protective behavior experienced by homeowners, and how are these addressed in current interventions, if at all?
Second, how can the communication about radon be improved to be more relevant and engaging for the target group? To apply the participatory design, we composed a research team comprising researchers from different disciplines, such as risk communication, health communication, sociology, nuclear physics, and citizen science. This ensured the avoidance of conceptual bias. Most researchers of the team had expertise with qualitative methods and radon research; however, none had operational expertise in design thinking as a research method. Therefore, the research protocol was developed in collaboration with a Belgian company specializing in design thinking (ACOMPANY). The company also provided a full training day of the method for all researchers involved in this study. 3.1. Participants The aim was to recruit participants who already had some experience with radon so that they could speak from their own experiences rather than a hypothetical scenario. This meant that we recruited people who had already measured (high) radon levels. 3.2. Workshop Design A workshop was designed that consisted of two unstructured group sessions. Each session lasted two hours and was scheduled a week apart. More specifically, the framework of the double diamond was applied to the context of radon and the workshop design itself . The first stage of this framework, as seen in , is the challenge, which is the starting point of the workshops and describes the ideal scenario. For this research project, the challenge was defined as “would it not be nice if all houses were radon-free,” referring to the ideal scenario where radon protective behavior is performed and facilitated easily among all homeowners in radon-prone areas. In the first session, the participants used this challenge to consider why houses are not already radon-free. In other words, “would it not be nice if all houses were radon-free” was the initial prompt to discuss barriers and facilitators in the first session. Since the participants all had experience with radon, this prompt was understandable for the participants as a starting point. Participants recorded all the problems (i.e., barriers) that arose on post-it notes while discussing them. These problem statements could relate to the causes of the challenge, the importance, the target audience, and other related issues, specifically in the form of “how-to questions.” This stems from the concept of how to ensure that all houses are radon-free, formulating a barrier as a facilitator; for instance, “how to make people aware” (i.e., facilitator) refers to the lack of awareness (i.e., barrier). Once saturation was reached and no new problems were added, dot-voting allowed for defining the most pressing problem statements. In other words, the first session discovered the why of the main challenge. Between the first and second sessions, the problem was defined further. In this case, the problem definition for the second session was “how to improve radon communication.” In the workshop’s second session, this was used as the prompt to start the discussion, together with the main findings from the first session. In this session, the focus was on ideation and brainstorming. The participants discussed potential radon communication strategies, selected the ones they considered the best, and started to develop protocols for the materials, which led to a communication strategy. This session explored the how of the main challenge. 
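To make the two-session flow just described easier to follow, it can be summarized as a small data structure. The sketch below is purely illustrative and is not part of the study materials; the labels and field names are our own paraphrases of the procedure.

# Illustrative only: the two-session design-thinking flow described above,
# expressed as data. All labels paraphrase the text; nothing here was used in the study.
WORKSHOP_PLAN = [
    {
        "session": 1,
        "prompt": "Would it not be nice if all houses were radon-free?",
        "diverge": "collect barriers as 'how-to' problem statements on post-it notes",
        "converge": "dot-voting to select the most pressing problem statements",
        "output": "prioritized problem statements",
    },
    {
        "session": 2,
        "prompt": "How to improve radon communication?",
        "diverge": "brainstorm communication ideas",
        "converge": "vote for the best ideas and start developing them",
        "output": "outline of a communication strategy",
    },
]

for step in WORKSHOP_PLAN:
    print(f"Session {step['session']}: {step['prompt']}")
    print(f"  diverge : {step['diverge']}")
    print(f"  converge: {step['converge']}")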
Both sessions aimed to diverge first (i.e., creating options) and converge afterward (i.e., selecting options). One of the tools often used in design thinking approaches is developing a customer journey, which indicates all the steps between being aware and purchasing a product or even becoming an ambassador (i.e., as a customer actively promoting the product among peers). Based on the precaution adoption process model and the 17 steps of radon behavior developed by Hevey , a homeowner journey was developed before the workshops. Seven steps were identified: awareness, evaluation of the knowledge (i.e., engagement with the health issue), purchase of radon test kit, delivery and conducting radon test, action (i.e., mitigating home), reassuring (i.e., confirming successful mitigation by re-testing), and ambassadorship (i.e., convincing others about the importance of radon tests). For every step, barriers, motivations, emotional states, and actions were identified. Developing the homeowner journey ensured a complete overview of the available literature about radon behavior. The full homeowner journey can be found in . If the discussion stalled, the homeowner journey was an additional prompt during the first sessions. The workshops were conducted in Belgium and Slovenia. 3.3. Workshop 1: Belgium Effects of radon are a significant health problem in Belgium. Approximately 48% of the Walloon region in Belgium is expected to be affected by radon . Radon likely contributes to approximately 480 deaths due to lung cancer per year . To prevent this, approximately 36,000 dwellings need to be mitigated . The Federal Agency for Nuclear Control (FANC) is responsible for organizing activities to apply the regulations, comply with the obligations, and raise awareness among the actors involved in radon. Therefore, FANC strives for close collaboration with multiple actors, such as the provinces, municipalities, professional organizations, academic institutions, and the public. While exposure to radon at work is regulated and the employer is responsible for mitigating the workplace, mitigation of dwellings is not legally required. It remains the responsibility of the homeowner . To increase the number of radon tests in dwellings, regional authorities subsidize radon test kits, reducing the price of a test kit from 30 euros to 15 euros. Financial help from the regional government for mitigation actions is also in place. The mitigation of a dwelling in Belgium costs between 500 euros and 5000 euros. Lists of companies with expertise in radon mitigation are published online . A communication plan was defined in 2014 and is updated yearly based on the evaluation of the past year to improve awareness and increase mitigation rates. In this context, a dedicated internet page was established. The effectiveness of the communication interventions is evaluated for the most impactful activities, such as orders of test kits. Other measures such as reach (e.g., visits to internet pages) and media return are also evaluated. FANC also tested social advertising in 2021 (paid ads on Twitter). However, this campaign was not further evaluated. The results of a public opinion survey show that 32% of the population are aware of radon and that 11% of them applied some mitigation measure in their home . The first workshop was conducted in March 2022 in Belgium. Due to COVID-19 restrictions, both sessions occurred online. An online whiteboard was used as an alternative to physical post-its. 3.3.1.
Sample Recruitment was conducted through local authorities, who spread the message about the workshops on their social media and websites. The principal investigator also contacted radon mitigation companies, who, in turn, forwarded the message to people who had completed (or were in the process of completing) radon mitigation. This way, people were invited to contact the research team to enroll in the workshops. The sample of the first workshop consisted of six participants, of whom four had detected radon in their homes and two were professionally engaged with radon. Three participants belonged to the same family, all living in Luxembourg. This was unforeseen and only known at the start of the first session, but due to recruitment challenges, we decided that they could still participate, as their experiences could inform us as well. In every session, five participants were present, with four overlapping participants in both sessions. 3.3.2. Facilitation Facilitators of ACOMPANY moderated the workshop in Belgium. This allowed the research team to observe and learn the methods they adopted. During both sessions, the researchers observed without interfering, as the objective was to explore first-hand barriers and solutions of the participants. This workshop demonstrated some limitations to the online format; therefore, we decided to wait until the end of COVID-19 restrictions to host the second workshop face-to-face. 3.4. Workshop 2: Slovenia Due to its geology, Slovenia has many municipalities heavily influenced by radon. It is estimated that 100 people per year die due to lung cancer caused by radon . To prevent radon-related deaths, the Slovenian Radiation Protection Administration is responsible for the Radon Action Plan . Through online and face-to-face meetings, it consults with all ministries involved with radon, including the Ministry of Health and the Ministry of the Environment, Technical Support Organizations, and Education. Free measurements for dwellings are available for residents in radon-risk areas; however, the number of available tests is limited. The average mitigation cost for a standard dwelling amounts to a few thousand euros. Target groups of communication interventions are employers, employees, local decision-makers, and the public in general. Communication interventions are focused on increasing awareness and are mainly developed in the form of brochures. Other strategies include news articles, seminars, expert meetings, workshops, and a comic book for children . Perko and Turcanu determined that the frequency of personal advice, dialogue, and response to radon-related questions and concerns of residents is very good in Slovenia compared to other European countries . The effectiveness of the communication interventions is not measured, and objective radon awareness measurements among residents are unavailable. In May 2022, the second workshop occurred face-to-face in Slovenia. The recruitment was likewise conducted through local authorities; however, it was also picked up by local media, such as the local radio and newspaper. 3.4.1. Sample The sample of the second workshop consisted of 9 participants in the first session and 8 participants in the second session. All of them were residents from a high-risk area in Slovenia who were experienced with testing their homes and had detected indoor radon concentrations above the reference level of 300 Bq/m³. They all were either planning to mitigate or had already performed mitigation measures. 3.4.2.
Facilitation The second workshop was moderated by two researchers of the research team, native Slovenian speakers with experience with moderating qualitative research. The researchers who conducted the second workshop were briefed by those who observed the first one to align the workshop procedures. 3.5. Data Analysis Both workshops were recorded and transcribed according to the ethical guidelines of the social sciences. The research team conducted an inductive thematic analysis, adopting a semantical approach. The participants recorded their main thoughts regarding the barriers, facilitators, and communication approaches on post-it notes. Therefore, their views, opinions, and experiences were made explicit, hence the semantic approach. These post-it notes were used to code the transcripts to provide more background information. After each session, these post-it notes (i.e., codes) were categorized thematically by the research team, until a consensus was reached. Since the approach was to explore the barriers, facilitators, and communication ideas, no pre-defined codebook was used.
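As a minimal illustration of the bookkeeping this analysis involves, and assuming nothing about the tooling actually used in the study, the post-it codes can be grouped into researcher-defined categories and counted in a few lines of Python; the mapping of codes to categories below is invented for the example.

from collections import Counter

# Hypothetical mapping of post-it codes to thematic categories (for illustration only;
# the real categorization was done manually by the research team until consensus was reached).
CODE_TO_CATEGORY = {
    "How to make people aware?": "communication",
    "How to visualize the danger?": "communication",
    "How to find reliable information?": "communication",
    "How to get help to remediate?": "mitigation contractors",
    "How to oblige radon measures in new buildings?": "installing standardization to ensure quality",
}

def category_counts(codes):
    # Tally how many post-it codes fall into each thematic category.
    return Counter(CODE_TO_CATEGORY.get(code, "uncategorized") for code in codes)

session_codes = list(CODE_TO_CATEGORY)  # pretend these came from one session
for category, n in category_counts(session_codes).most_common():
    print(f"{category}: n = {n}")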
4.1. Workshop 1: Belgium (Online) 4.1.1. Session 1: Problem Statements The results of the first session were oriented toward problem formulations related to the following challenge: "would it not be nice if all houses were radon-free?". In total, 36 problem statements were formulated, identifying the underlying barriers and facilitators. Not all of them were in the "how-to" format. However, they were still valuable in emphasizing certain problem areas. The following are examples of problem statements: "How to establish an EU standard?", "How to oblige radon measures in new buildings?", "How to find help from the state?", "How to facilitate the necessary steps?", "How to shock people?", "How to develop a decision tree?", etc. The full list of problem statements can be found in . Another example includes problem statements such as "How to make people aware?", "How to ‘touch’ people?", "How to visualize the danger?": "… we realize that people don’t know about radon in our country. I live in the province of Luxembourg [Belgium], which is the most affected. And despite everything we do, people don’t know about it. I think that if we want to be able to act and do something, people must first know." (P2) "One difficulty is that when we talk about the FANC [Federal Agency for Nuclear Control], we don’t know, it’s something we don’t know too much about, which is, which is not close to here. So, there is a certain distance, both physical and perhaps also in the consciousness of people." (P3) Other problem statements included issues related to "How to get help to remediate?", "How to find reliable information?" and "How to find the right solution for the right house?": "To give you an example, we have a list of companies in Luxembourg [country] that should be able to deal with radon. We contacted them all, the whole list, there is nobody who really has experience on it, but they are on the list of experts." (P5) After diverging, i.e., collecting different problem statements, and after saturation was reached, the participants converged by choosing the problems that they felt were most important, as presented in . Participants compiled their top 3 issues. To provide an overview of the prioritized issues, researchers attributed 3 points to their number 1, 2 points to their number 2, and 1 point to their number 3. The ones with the most points are therefore considered the most important. Problem definition After the first session, researchers clustered the problem statements thematically to identify the underlying facilitators. The following categories were formulated: installing standardization to ensure quality ( n = 7), clarifying a stepwise approach ( n = 4), communication through different stakeholders ( n = 4), thresholds ( n = 7), cost of mitigation ( n = 2), mitigation contractors ( n = 2), and communication ( n = 10). The full overview can be found in . Since the study aimed to co-design communication tools, the problem definition was also related to communication. As communication was also highly represented and comprised some of the prioritized problem statements, this decision was justified. 4.1.2. Session 2: Solution Statements In the second session, the working statement concerned communication. In total, 41 ideas were presented by the participants.
Examples of ideas are workshops in primary schools, including general practitioners in the communication concerning radon, creating a "radon safe" label, a testimonial of someone who easily mitigated, a catchy radio spot with humor, advertising via social media, more visibility to mitigation companies, flyers in public spaces, etc. The full list of communication ideas can be found in . After saturation during the brainstorming, participants converged by voting for their favorite ideas. They each had two votes, and the results are presented in . During this session, the facilitator prompted ideas for four steps of the homeowner journey: radon awareness, evaluation (before testing), action (i.e., mitigation), and ambassadorship. To simplify the process for the participants, the research team decided to map the ideas to the homeowner journey among themselves after the session. Some ideas were mapped to multiple stages. The full overview can be found in . Most of the ideas were mapped to the first ( n = 20) and the second step ( n = 20), with a lot of overlapping communication strategies such as an advertising campaign via social media, a catchy radio spot with humor, a booklet in schools, press articles, and flyers. In the action step, fewer ideas were presented ( n = 14), and these strategies involved more specific information. Examples include a testimonial of someone who easily mitigated radon effects, flyers with information about mitigation costs, showing examples of other people who mitigated, showing pictures that emphasize the simplicity of the process, and providing more visibility to solutions and mitigators. Finally, the last step, ambassadorship, was the one with the fewest ideas ( n = 5); however, those ideas do emphasize the social component of communication strategies, including, for instance, an advertising campaign on social media, a testimonial, creating a "radon safe" label, or organizing a competition with prizes for people who mitigated their houses. Due to the limits of the online format in terms of time management and group dynamics, the second session of the first workshop ended with prioritizing solutions and did not proceed further with designing the solutions. 4.2. Workshop 2: Slovenia (Face-to-Face) 4.2.1. Session 1: Problem Statements Similar to the first workshop in Belgium, the first session in Slovenia was oriented toward problem formulations; however, the highly involved participants had already started formulating solutions at this stage. Despite the different formats, the solutions provided in this first session also expose underlying issues. For clarification, we rephrased the solutions from this first session into problem statements; however, the original formulations can still be found in . In total, 45 problem statements/solutions were formulated. A few examples include: "How to include radon as a topic in schools?", "How to provide understandable and accessible information about mitigation?", "How to provide accessible free dosimeters?", "How to get subventions from the state?", "How to guarantee the quality of the mitigation works?", etc. The full list can be found in . Another example is "How to increase awareness about radon in the population?".
Multiple participants indicated that they learned about radon through their social networks: "Well, then one of my friends was encouraged [to test], and she also said, I didn’t know either, I didn’t know, and the problem is that we ordinary people don’t even know, unless we are really terribly interested in it, to even report it so that you can measure it." (P6) "We had a measurement done because a friend of ours had done it a couple of 500 m away, and then we had it done." (P9) After diverging, and when no new problems were added, the participants converged by voting for the most important problem statements in their opinion. They each cast three votes. The issues with the most votes were considered the most important barriers. The results of the dot voting can be found in . Problem definition The problem statements were clustered thematically by the researchers, resulting in the following categories: communication, information, and awareness ( n = 10), advice after measurement ( n = 6), comprehensive/holistic approach ( n = 3), accessibility of passive and active dosimeters and measurement support ( n = 9), mitigation support ( n = 5), the financial burden of mitigation ( n = 5), the legal requirement ( n = 6), and motivation ( n = 1). The full overview can be found in . Similar to the Belgian workshop, the communication, information, and awareness category was emphasized. Again, this justified the decision to focus on communication in the second session. More specifically, the following questions were raised: How do you think radon awareness should be raised? Moreover, how should advice on mitigation be communicated? 4.2.2. Session 2: Solution Statements For the first question, about awareness, 22 ideas were formulated, including advertisements on YouTube, TikTok, and Instagram, regular information about radon in mass media, personal letters to all households, an interactive portal about radon, radon education in schools, and contributions about radon on TV, radio, and in newspapers. The participants voted for the best ideas, which can be found in . The group then discussed the details of the personal letter (i.e., informing households by post). For instance, the participants discussed that the letter should cover the prevalence of radon, the dangers, locations and ways to order dosimeters, the radon values of concern, and an invitation to participate in the measurements. They discussed that the municipality should draft the letter with an official signature. Further, they discussed the possibility of opening a special office to manage the radon campaign. The group also discussed whom to target and whether it should be addressed or unaddressed mail. They mentioned that a special message could be printed on the envelope, such as "it’s about your health." The participants agreed that the letter should be sent in the winter. Creating a logo or corporate identity was also discussed, using red and yellow, as these colors are associated with radon areas, and green because it is associated with a solution. The logo should be intimidating in the first part and reassuring in the second part, as a solution.
The results of the voting on the second question, with the resulting prioritized ideas, can be seen in . The idea that received the most support was to hear people’s testimonials about their experiences with mitigation. The stories could either include a successful experience or lessons learned from less successful experiences. There was an idea to organize this through social networks online, for instance, through municipalities on social media. The group agreed that the information should not be too technical and should not resemble a commercial. Finally, they also discussed the need to target younger generations who are buying and building houses, and that information channels should be chosen accordingly.
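For illustration only, the two prioritization procedures reported above, the Belgian top-3 weighting (3, 2, and 1 points) and the dot-voting with a fixed number of votes per participant, reduce to simple weighted tallies. The sketch below uses hypothetical statements and votes and is not derived from the workshop data.

from collections import Counter

TOP3_WEIGHTS = (3, 2, 1)  # points for a participant's 1st, 2nd, and 3rd choice

def weighted_top3_scores(rankings):
    # rankings: one list per participant, holding that participant's top-3
    # problem statements in order of importance. Returns total points per statement.
    scores = Counter()
    for ranking in rankings:
        for statement, points in zip(ranking, TOP3_WEIGHTS):
            scores[statement] += points
    return scores

def dot_vote_counts(votes):
    # votes: flat list of statements voted for; each participant casts a fixed
    # number of votes (e.g., two in Belgium's second session, three in Slovenia).
    return Counter(votes)

# Hypothetical example with three participants ranking their top-3 statements.
rankings = [
    ["How to make people aware?", "How to find reliable information?", "How to get help to remediate?"],
    ["How to find reliable information?", "How to make people aware?", "How to shock people?"],
    ["How to make people aware?", "How to get help to remediate?", "How to find reliable information?"],
]
for statement, score in weighted_top3_scores(rankings).most_common():
    print(f"{score:2d} points  {statement}")
print(dot_vote_counts(["testimonials", "personal letter", "testimonials"]))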
By setting up a qualitative co-design workshop with homeowners, we aimed at gaining more in-depth knowledge about the barriers that people experience in mitigating their house, on the one hand, and at collecting their creative input and insights into how communication about the dangers of radon could be improved, on the other. First of all, the results show that the barriers people experience are situated within different levels of intervention and different steps of behavior, as described in the literature review. The stages discussed in this section are simplified and focus on awareness, testing, and mitigating behavior for clarification purposes. Barriers related to the first stages of behavior centered on a lack of awareness and of engaging communication. The participants agreed that awareness should be the first step. In Belgium, the focus was placed on more attention-grabbing awareness campaigns, such as social media campaigns and humor, while Slovenia focused on personalized letters. This is in line with the research of Weinstein et al., who tested whether personalized phone calls and letters affected perceived susceptibility and self-protective behavior (i.e., intention to test). They determined that personal susceptibility did increase significantly for those who received the phone call and the letter; however, no differences were detected in terms of intention to test. This could indicate that the letters proposed by the participants could successfully increase engagement with the health topic, yet that other communication strategies are needed to address the further steps in the mitigating process . These results also show the nuance of the concept of awareness, where a discrepancy between being aware and making a personal risk assessment remains. As Poortinga et al. reported, high levels of awareness do not always result in higher levels of concern; therefore, raising awareness could be focused more on grabbing attention and raising curiosity rather than merely informing. Barriers associated with testing behavior include the lack of available active and passive dosimeters in Slovenia. According to the participants, communication in this stage should be more specific than in the awareness stage; for instance, a comprehensive website with information, workshops, or newspaper articles would provide them with the information they need without overwhelming them.
Moreover, information from different stakeholders, such as medical doctors, could help emphasize the importance of radon testing. Apart from the accessibility of tests in Slovenia, no issues were mentioned regarding the costs of test kits. When examining the next stage, it can be observed that many barriers are related to mitigating behavior. Participants highlighted the importance of personalized advice after testing, with a clear step-wise approach to what to do next and how to do it. Finding mitigation companies with radon experience was challenging, according to the participants in Belgium and Slovenia. Moreover, the lack of guaranteed results after mitigation was a particularly important barrier in Belgium. Participants indicated that it should be the state’s responsibility to implement regulations for these companies, as that would make it easier for homeowners to find the best help for their particular radon problem. This could be achieved by certifying certain mitigation companies or by carrying out inspections at mitigation companies, as proposed by the participants. Further, the financial burden of mitigation was mentioned in both workshops, emphasizing the need for subventions or financial aid from the government. Regarding mitigation behavior, the participants indicated a need for communication on different levels, for instance, stakeholder communication. They felt the involved stakeholders (e.g., medical professionals, mitigation companies, local authorities) are not sufficiently up to date to help homeowners appropriately with radon issues. Especially in this stage, participants expressed a need for detailed and clear information, and both countries suggested using testimonials. The participants emphasized that the testimonial should contain the story of someone who mitigated their house, or the lessons that could be learned from unsuccessful mitigations. In that way, both the problem and the solution are addressed. This idea is already supported by the literature on narratives, which indicates that narratives can help facilitate information processing, comprehension, and recall . Overarching barriers were related to legislation and regulation. On the policy level, both workshops showed a need for obligatory radon measures in new buildings; moreover, a need for a European standard was also expressed in Belgium. Despite the European Basic Safety Standards and the inclusion of radon measures in the building permit in Belgium, participants still identified these aspects as needs for future policy-level interventions (Council Directive 2013/59/EURATOM, of 5 December 2013). This aligns with the current policy measures; however, the policy must be implemented sufficiently to reduce the barriers homeowners face. Further, adding radon levels to the energy certificate, in order to regulate radon levels in the housing market, was proposed as a policy change; this agrees with previous research on mitigation . Regarding communication, the participants highlighted the need for a holistic, step-wise approach, where communication follows the different stages of behavior and a consistent message is conveyed across stakeholders, channels, and time. Generally, it is important to note that behavior change will only occur if the environment is ready. In other words, barriers related to, for instance, the availability of dosimeters and mitigation companies should be addressed first, before communicating about the health risks, to ensure that fitting solutions are available.
This study indicated that co-design workshops and participatory research are crucial for gaining the users’ perspectives and ideas early in the intervention design. Comparing the two workshops, the face-to-face format was preferred, especially since this setting increased the group dynamic and collaboration efforts. The online format was, given the circumstances, still valuable for understanding the barriers and collaborating on communication ideas, yet a face-to-face setting would be needed to conduct an even more in-depth inquiry. Design-thinking workshops have been shown to be valuable in the intervention design process related to radon; however, other health topics could and should also be addressed with participatory methods, such as design thinking, early on, to maximize the involvement and input of the target group.

5.1. Limitations

Just like any study, this study also experienced some limitations. Ideally, both workshops would have been conducted in a face-to-face setting instead of the online setting in Belgium. This would have facilitated even more creativity and sharing of experiences among the participants. Moreover, recruitment challenges limited us to one workshop with two sessions in each country. Although we gained many new perspectives and ideas, more workshops with more participants would allow for saturation among the population instead of saturation among the sample. Regarding the sample of these workshops, we focused on homeowners who had measured (high) radon levels in their homes. Although this was the purpose of the study, it created selection bias.

5.2. Future Research

Future research should explore more participatory research designs, both in intervention design research and in radon health communication, emphasizing different social categories and countries. Moreover, scholars could investigate more comprehensive communication strategies, with messages adapted to the sample’s stage of behavior change. Finally, researchers could explore the ideas provided by the participants further, in terms of theoretical framework, but also in terms of effectiveness in a lab setting.
In this study, the following questions were raised: First, what are the main barriers and facilitators, experienced by homeowners, to engaging in radon-protective behavior, and how are these addressed in current interventions? Second, how can communication about radon be made more relevant and engaging for the target group? To investigate these questions, we designed a participatory co-design research method with homeowners in Belgium and Slovenia. The findings of these workshops show that participants require more policy and legislation, for instance, about certifying mitigation companies or including radon measurement on the energy certificate. Moreover, they experience a need for support from the state during radon testing and mitigation procedures, both in terms of financial aid and in terms of communication or advice. Furthermore, they indicated a need for more awareness among the general public and, more specifically, pointed to a lack of engagement. A holistic communication approach is also needed, one that includes stakeholders such as general practitioners and architects. Looking at communication specifically, both workshops suggested that communication strategies should be adapted to match each stage, from awareness to having a radon-safe home. Communication tools such as radio spots with humor, or personalized letters, were proposed to raise awareness and engagement. Further, testimonials were pointed out as an effective way to highlight the issues and solutions of people who reported similar experiences. Further research should adopt co-design methods, both in research about radon health communication and in other fields. Further, scholars could test the effectiveness of some of these ideas in a controlled setting and in an integrated, multi-stage intervention.
Anisotropic shortening in the wavelength of electrical waves promotes onset of electrical turbulence in cardiac tissue: An | d50269c9-06a7-48c4-a485-d32692506fda | 7069633 | Physiology[mh] | Nonlinear waves in the form of spirals occur in many excitable media, examples of which include Belousov-Zhabotinsky-type systems , calcium-ion waves in Xenopus oocytes , the aggregation of Dictyostelium discoideum by cyclic-AMP signaling , the oxidation of carbon monoxide on a platinum surface , and, most important of all, cardiac tissue . Understanding the development of such spiral waves and their spatiotemporal evolution is an important challenge in the study of extended dynamical systems, in general, and especially in cardiac tissue, where these waves are associated with abnormal rhythm disorders, which are also called arrhythmias. Cardiac tissue can support many patterns of nonlinear waves of electrical activation, like traveling waves, target waves, and spiral and scroll waves . The occurrence of spiral- and scroll-wave turbulence of electrical activation in cardiac tissue has been implicated in the precipitation of life-threatening cardiac arrhythmias like ventricular tachycardia (VT) and ventricular fibrillation (VF), which destroy the regular rhythm of a mammalian heart and render it incapable of pumping blood. These arrhythmias are the leading cause of death in the industrialized world . Biologically, VF can arise because of many complex mechanisms. Some of these are associated with the development of instability-induced spiral- or scroll-wave turbulence . One such instability-inducing factor is ionic heterogeneity , which arises from variations in the electrophysiological properties of cardiac cells (myocytes), like the morphology and duration of their action-potentials ( AP s) . Such variations may appear in cardiac tissue because of electrical remodeling , induced by alterations in ion-channel expression and activity, which arise, in turn, from diseases like ischemia , some forms of cardiomyopathy , and the long-QT syndrome . To a certain extent, some heterogeneity is normal in healthy hearts; and it has an underlying physiological purpose ; but, if the degree of heterogeneity is more than is physiologically normal, it can be arrhythmogenic . It is important, therefore, to explore ionic-heterogeneity-induced spiral- or scroll-wave turbulence in mathematical models of cardiac tissue, which allow us to control this heterogeneity precisely, in order to be able to identify the nonlinear-wave instability that leads to such turbulence. We initiate such a study by examining the effects of this type of heterogeneity in three cardiac-tissue models, which are, in order of increasing complexity and biological realism, (a) the two-variable Aliev-Panfilov model , (b) the ionically realistic O’Hara-Rudy (ORd) model in two dimensions (2D), and (c) the ORd model in an anatomically realistic simulation domain. In each one of these models, we control parameters (see below) in such a way that the ion-channel properties change anisotropically in our simulation domains, thereby inducing an anisotropic spatial variation in the local action potential duration APD . We show that this variation in the APD leads, in all these models, to an anisotropic reduction of the wavelength of the spiral or scroll waves; and this anisotropic reduction of the wavelength paves the way for an instability that precipitates turbulence, the mathematical analog of VF, in these models.
The Aliev-Panfilov model provides a simplified description of an excitable cardiac cell . It comprises a set of coupled ordinary differential equations (ODEs) for the normalized representations of the transmembrane potential V and the generalized conductance r of the slow, repolarizing current:

$\frac{dV}{dt} = -kV(V-a)(V-1) - Vr$;  (1)

$\frac{dr}{dt} = \left[\epsilon + \frac{\mu_1 r}{\mu_2 + V}\right]\left[-r - kV(V - b - 1)\right]$;  (2)

fast processes are governed by the first term in , whereas the slow, recovery phase of the AP is determined by the function $\epsilon + \frac{\mu_1 r}{\mu_2 + V}$ in . The parameter a represents the threshold of activation and k controls the magnitude of the transmembrane current. We use the standard values for all parameters , except for the parameter k. We write $k = g \times k_o$, where g is a multiplication factor and $k_o$ is the control value of k. In 2D simulations we introduce a spatial gradient (a linear variation) in the value of k along the vertical direction of the domain. To mimic the electrophysiology of a human ventricular cell, we perform similar studies using a slightly modified version of the ionically realistic O’Hara-Rudy model (ORd) . Here, the transmembrane potential V is governed by the ODE

$\frac{dV}{dt} = -\frac{I_{ion}}{C_m}, \quad I_{ion} = \sum_x I_x$,  (3)

where $I_x$, the membrane ionic current for a generic ion channel x of a cardiac cell, is

$I_x = G_x f_1(p_{act}) f_2(p_{inact}) (V_m - E_x)$,  (4)

where $C_m = 1\,\mu F$ is the membrane capacitance, $f_1(p_{act})$ and $f_2(p_{inact})$ are, respectively, functions of the probabilities of activation ($p_{act}$) and inactivation ($p_{inact}$) of the ion channel x, and $E_x$ is its Nernst potential. We give a list of all the ionic currents in the ORd model in . We write $G_i = g \times G_{io}$, where $G_{io}$ is the original value of the maximal conductance of the ion channel x in the ORd model, and g is a multiplication factor. We model gradients in $G_i$ as follows:

$G_i(y) = \left[g_{min} + \frac{y\,(g_{max} - g_{min})}{L}\right] G_{io}, \quad 0 \le y \le L$;  (5)

here, L is the length of the side of the square simulation domain, and $g_{max}$ and $g_{min}$ are, respectively, the maximal and minimal values of g; we can impose gradients in k in the Aliev-Panfilov model in the same manner. For simplicity, we induce the gradient along one spatial direction only: the vertical axis in 2D, and the apico-basal (apex-to-base) direction in 3D. The spatiotemporal evolution of V in both models is governed by the following reaction-diffusion equation:

$\frac{\partial V}{\partial t} + I = \nabla \cdot (D \nabla V)$,  (6)

where D is the diffusion tensor, and $I = \frac{I_{ion}}{C_m}$ and $kV(V-a)(V-1) + Vr$ for the ORd and Aliev-Panfilov models, respectively. For the numerical implementation of the diffusion term in , we follow Refs. . We construct our anatomically realistic simulation domain with processed human-ventricular data, obtained by using Diffusion Tensor Magnetic Resonance Imaging (DTMRI) . For our 2D isotropic domain with the ORd model, we set $D = 0.0012\ \mathrm{cm^2/ms}$. The temporal and spatial resolutions are set to be $\delta x = 0.02$ cm and $\delta t = 0.02$ ms, respectively, and all the simulations are performed in a domain with 960 × 960 grid points. For the anatomically realistic domain, we use a phase-field method for the boundary conditions . The value of the diffusion constant along the fiber ($D_\parallel$) is set equal to the value of D in the 2D isotropic case (i.e., $0.0012\ \mathrm{cm^2/ms}$), and its value perpendicular to the fiber ($D_\perp$) is 1/4 times $D_\parallel$.
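To make the discretization concrete, the following is a minimal sketch of Eqs. (1), (2), (5), and (6) for the Aliev-Panfilov case, written in Python with forward-Euler time stepping. It is an illustration only, not the code used in the study: the parameter values (a, b, ε, μ1, μ2, k0), the dimensionless grid settings, the no-flux boundaries, and the plane-wave initial condition are assumptions made for this sketch; only the functional forms of the equations and the idea of a linear gradient in k along the vertical direction are taken from the text above.

```python
import numpy as np

# Illustrative 2D Aliev-Panfilov solver with a linear gradient in k (Eqs. 1, 2, 5, 6).
a, b, k0 = 0.15, 0.15, 8.0            # b is commonly set equal to a (assumption)
eps, mu1, mu2 = 0.002, 0.2, 0.3       # commonly quoted Aliev-Panfilov values (assumed)
g_min, g_max = 0.5, 1.5               # multiplication factor g varies linearly in y
N, dx, dt, D = 200, 0.6, 0.02, 1.0    # dimensionless grid spacing, time step, diffusion (assumed)

V = np.zeros((N, N))                  # V[i, j]: i indexes y (vertical), j indexes x
r = np.zeros((N, N))
V[:, :5] = 1.0                        # plane wave launched from the left edge

y = np.linspace(0.0, 1.0, N).reshape(N, 1)
k = (g_min + y * (g_max - g_min)) * k0          # Eq. (5), applied here to k

def laplacian(f):
    """Five-point Laplacian with no-flux (Neumann) boundary conditions."""
    fp = np.pad(f, 1, mode="edge")
    return (fp[:-2, 1:-1] + fp[2:, 1:-1] + fp[1:-1, :-2] + fp[1:-1, 2:] - 4.0 * f) / dx**2

for step in range(20000):
    # Eq. (1) plus the diffusive coupling of Eq. (6):
    dV = -k * V * (V - a) * (V - 1.0) - V * r + D * laplacian(V)
    # Eq. (2), the slow recovery variable:
    dr = (eps + mu1 * r / (mu2 + V)) * (-r - k * V * (V - b - 1.0))
    V += dt * dV
    r += dt * dr
```

From such a plane wave, a spiral can then be initiated by the usual cross-field (S1-S2) protocol before the effect of the gradient on its stability is examined.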
The simulation is performed in a cubical domain with $512^3$ grid points, with the same spatial and temporal resolutions that we use in our 2D simulations. We do not incorporate the intrinsic ionic heterogeneities that are present in real mammalian hearts . In our single-cell simulations, the APD is calculated by measuring the duration over which the cell depolarizes and then repolarizes to 90% of its peak transmembrane voltage in the action potential ($APD_{90}$).
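The APD90 measurement described above can be expressed in a few lines. The helper below is a sketch: the threshold convention (repolarization by 90% of the peak-to-rest amplitude, timed from the point of maximal dV/dt) and the synthetic trace are illustrative assumptions, not output of the ORd model.

```python
import numpy as np

def apd90(t, v):
    """APD90 of a single action potential: time from the upstroke (maximal dV/dt)
    until V has repolarized by 90% of the AP amplitude (peak minus resting value)."""
    v_rest, v_peak = v[0], v.max()              # assumes the trace starts at rest
    v90 = v_peak - 0.9 * (v_peak - v_rest)
    i_up = int(np.argmax(np.gradient(v, t)))    # index of the upstroke
    i_peak = int(np.argmax(v))
    after_peak = np.nonzero(v[i_peak:] <= v90)[0]
    return t[i_peak + after_peak[0]] - t[i_up] if after_peak.size else np.nan

# Synthetic, AP-shaped trace (purely illustrative)
t = np.linspace(0.0, 500.0, 5001)                                   # time in ms
v = -85.0 + 125.0 * (t > 2.0) * (1.0 - np.exp(-(t - 2.0))) * np.exp(-t / 180.0)
print(f"APD90 = {apd90(t, v):.0f} ms")
```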
Spiral-wave instability

In we show the variation, with the parameter g, of $\overline{APD} = APD/APD_o$, where $APD_o$ is the control value of the APD for g = 1. We find that $\overline{APD}$ decreases with increasing g. Changes in the APD at the single-cell level influence electrical-wave dynamics at the tissue level. In particular, such changes affect the rotation frequency ω of reentrant activity (spiral waves). If θ and λ denote, respectively, the conduction velocity and the wavelength of a plane electrical wave in tissue, then $\omega \simeq \theta/\lambda$ and $\lambda \simeq \theta \times APD$. Therefore, if we neglect the effects of curvature and of the excitable gap, the spiral-wave frequency

$\omega \simeq \frac{1}{APD}$.  (7)

We find, in agreement with this simple, analytical estimate, that ω decreases as the APD increases. We show this in by plotting $\bar{\omega} = \omega/\omega_0$ versus g; here, $\omega_0$ is the frequency for g = 1. For the parameter a, this simple relation between ω and the APD is not observed, because a change in a affects not only the APD but also other quantities, like θ, which in turn affect the value of ω. The spiral-wave frequency ω is obtained by simulating a spiral wave in a homogeneous domain for every value of g. Similarly, in the ionically realistic ORd model, changes in the ion-channel conductances $G_i$ alter the APD of the cell and, therefore, the spiral-wave frequency ω. In we present a family of plots to illustrate the variation in $\overline{APD}$ with changes in $G_i$. We find that $\overline{APD}$ decreases with an increase in g for most currents ($I_{Kr}$, $I_{Ks}$, $I_{K1}$, $I_{Na}$, and $I_{NaK}$), but it increases for some other currents ($I_{Ca}$, $I_{NaCa}$, and $I_{to}$). The rate of change of $\overline{APD}$ is most significant when we change $G_{Kr}$; by contrast, the APD is least sensitive to changes in $G_{Na}$ and $G_{to}$. In we show the variation of $\bar{\omega}$ with g for the different ion channels x. We find that changes in $G_i$ which increase the APD decrease ω, and vice versa; this follows from . The sensitivity of ω to changes in $G_i$ is largest for $G_i = G_{Kr}$ and smallest for $G_i = G_{to}$: $\bar{\omega}$ increases by $\Delta\bar{\omega} \simeq 1.23$ as g goes from 0.2 to 5; for $G_{to}$, the same variation in g decreases the value of $\bar{\omega}$ by $\Delta\bar{\omega} \simeq 0.04$. We have performed many simulations, for each $G_i$, with different values of $\Delta\bar{\omega}$, to check whether a critical value $\Delta\bar{\omega}_c$ exists such that, above (below) $\Delta\bar{\omega}_c$, we see wave breaks (no wave breaks) for all the $G_i$s. We find, however, that no such $\Delta\bar{\omega}_c$, common to all the $G_i$s, exists; this is because the stability of the spiral waves depends on the local values of the gradients in the APD. We now investigate the effects, on spiral-wave dynamics, of spatial gradients in k, in the 2D Aliev-Panfilov model, and in $G_i$, in the 2D ORd model. A linear gradient in k, in the Aliev-Panfilov model, induces a gradient in $\bar{\omega}$ (see ); and such a spatial gradient in $\bar{\omega}$ induces a spiral-wave instability in the low-$\bar{\omega}$ region. In we demonstrate how a gradient in k ($g_{max} = 1.5$ and $g_{min} = 0.5$) leads to the precipitation of this instability (also see ). Similarly, for each current listed in for the ORd model, we find wave breaks in a medium with a gradient in $G_i$. We illustrate, in , such wave breaks in our 2D simulation domain, with a gradient ($\nabla G_i$) in any $G_i$, for 3 representative currents; we select $I_{Kr}$, because it has the maximal impact on the single-cell APD, and also on ω in tissue simulations; and we choose $I_{K1}$ and $I_{NaCa}$, because they have moderate and contrary effects on the APD and ω.
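In practice, ω is typically read off from a long recording of V at one point of the domain: the dominant peak of its power spectrum gives the rotation frequency (spectra of this kind are also used below to distinguish periodic reentry from chaotic wave breakup). The snippet below is a sketch with an assumed sampling interval and a placeholder 4 Hz signal, which is also the value Eq. (7) predicts for an APD of 250 ms; it is not output from the models above.

```python
import numpy as np

def dominant_frequency(v, dt_ms):
    """Frequency (Hz) of the largest peak in the power spectrum of a local V(t) recording."""
    v = v - np.mean(v)                          # remove the DC component
    power = np.abs(np.fft.rfft(v))**2
    freqs = np.fft.rfftfreq(v.size, d=dt_ms * 1e-3)
    return freqs[np.argmax(power)]

dt_ms = 2.0                                     # assumed sampling interval
t = np.arange(0.0, 20000.0, dt_ms)              # 20 s of "recording", in ms
v = np.sin(2.0 * np.pi * 4.0 * t * 1e-3)        # placeholder 4 Hz reentry-like signal
print(f"omega = {dominant_frequency(v, dt_ms):.2f} Hz")   # ~4 Hz, i.e., ~1/APD for APD = 250 ms
```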
Our results indicate that gradient-induced wave breaks are generic, insofar as they occur in both the simple, two-variable (Aliev-Panfilov) and the ionically realistic (ORd) models of cardiac tissue. In , we present power spectra of the time series of V, recorded from a representative point of the simulation domain; these spectra show broad-band backgrounds, which are signatures of chaos, for the gradients $\nabla G_{Kr}$ and $\nabla G_{K1}$; however, the gradient $\nabla G_{NaCa}$ induces wave breaks while preserving the periodicity of the resultant, reentrant electrical activity, at least at the points from which we have recorded V. The instability of spiral waves occurs because spatial gradients in k (Aliev-Panfilov) or in $G_i$ (ORd) induce spatial variations in both $\overline{APD}$ and $\bar{\omega}$: in our simulation domain, the local value of $\bar{\omega}$ ($\overline{APD}$) decreases (increases) from the top to the bottom. In the presence of a single spiral wave (left panel of ), the domain is paced, in effect, at the frequency ω of the spiral, i.e., with a fixed time period $T = 1/\omega = APD + DI$, where DI is the diastolic interval (the time between the repolarization of one AP and the initiation of the next AP). Thus, the bottom region, with a long APD, has a short DI, and vice versa. The restitution of the conduction velocity θ implies that a small DI leads to a low value of θ, and vice versa (see ). To compensate for this reduction of θ, the spiral wave must reduce its wavelength λ in the bottom, large-APD (small-DI) region, so that its rotation frequency $\omega \simeq \theta/\lambda$ remains unchanged, as shown in (also see ), where the shortening of the spiral arms is indicated by the variation of λ along the spiral arm ($\lambda_2 > \lambda_1$ in the pseudocolor plot of $V_m$ in the top-left panel, t = 1.46 s). Clearly, this shortening is anisotropic, because of the uni-directional variation in k or $G_i$; this anisotropy creates functional heterogeneity in wave propagation, causing a local conduction block, which leads, in turn, to the spiral-wave instability we have discussed above . The phenomenon of conduction block in a medium with a gradient in ionic properties has been investigated extensively in an earlier study ; here, it is this local conduction block (caused by the anisotropy of the medium) that leads to the break-up of the spiral arms. It should be noted that the stability of the spiral wave depends on the APD difference between the region where the spiral is initiated and the top region, where the APD is maximum; therefore, its stability depends on the location of the spiral-wave initiation along the vertical direction. In the ORd model, we find that gradients in $G_{Kr}$ easily induce instabilities of the spiral for small values of $\Delta g \equiv g_{max} - g_{min} \simeq 0.5$; by contrast, in a medium with gradients in $G_{to}$, the spiral remains stable for values of $\Delta g$ as large as 4.8 (shown in ). This implies that the stability of the spiral depends on the magnitude of the gradient in ω that is induced in the medium.

Scroll-wave instability

In (also see ), we extend our study to illustrate the onset of scroll-wave instabilities in a 3D, anatomically realistic, human-ventricular domain, in the presence of spatial gradients in $G_{Kr}$. In mammalian hearts, the APD is typically lower in the apical region than in the basal region . Therefore, we use values of the APD that increase from the apex to the base (and, hence, ω decreases from the apex to the base).
With $g_{max}(G_{Kr}) = 6$ and $\Delta g = 4$, we observe the breakup of a scroll wave that is otherwise stable in the absence of this spatial gradient. We note that the mechanism for the onset of such scroll-wave instabilities is the same as in 2D, and it relies on the gradient-induced, anisotropic shortening of the scroll wavelength. As a control, we also perform a simulation with a small $\Delta g = 0.1$, which does not show a scroll-wave instability (see ).
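As an illustration of how such an apex-to-base gradient can be imposed on an irregular geometry, the sketch below applies Eq. (5) along the z axis of a crude ellipsoidal-shell stand-in for the DTMRI-based ventricular wall. Only the endpoints $g_{min} = 2$ and $g_{max} = 6$ (for $G_{Kr}$) come from the text; the geometry, resolution, and masking are assumptions made for illustration.

```python
import numpy as np

# Assumed ellipsoidal-shell "ventricular wall"; z is taken as the apico-basal axis.
nz, ny, nx = 64, 64, 64
z, y, x = np.meshgrid(np.arange(nz), np.arange(ny), np.arange(nx), indexing="ij")
rho2 = ((x - 32) / 28.0)**2 + ((y - 32) / 28.0)**2 + ((z - 32) / 30.0)**2
tissue = (rho2 < 1.0) & (rho2 > 0.55)            # crude wall between two ellipsoids

g_min, g_max = 2.0, 6.0                          # apex (z = 0) to base (z = nz - 1)
g = g_min + (z / (nz - 1.0)) * (g_max - g_min)   # Eq. (5) along the apico-basal axis
g_kr_scale = np.where(tissue, g, 0.0)            # G_Kr is scaled by g only inside the tissue

print(g_kr_scale[tissue].min(), g_kr_scale[tissue].max())   # spans roughly 2 ... 6 across the wall
```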
We have shown that gradients in parameters that affect the APD of the constituent cells induce spatial gradients in the local value of ω. This gradient in the value of ω leads to an anisotropic reduction in the wavelength of the waves, because of the conduction-velocity-restitution property of the tissue, and it paves the way for spiral- and scroll-wave instabilities in the domain. We would like to point out that this instability does not arise from the condition of steep APD-restitution curves reported in ref. . We find that the slopes of the APD-restitution curves in are less than one for all values of g and for all $G_i$. Therefore, the instability of waves in our study is induced by the anisotropic variation of the APD in the medium. This gradient-induced instability is a generic phenomenon, because we obtain it in both the simple Aliev-Panfilov model and the detailed ORd model for cardiac tissue. Such an instability should be observable in any excitable medium that has the conduction-velocity-restitution property. We find that the spiral or scroll waves always break up in the low-ω region. This finding is in line with that of the experimental study by Campbell et al. on neonatal-rat-ventricular cell cultures and a computational study by Xie et al. , who observe spiral-wave break-up in regions with a large APD . We find that the stability of the spiral is determined by the magnitude of the gradient in ω; the larger the magnitude of the gradient in the local value of ω, the more likely the spiral or scroll wave is to break up. By using the ORd model, we find that ω varies most when we change $G_{Kr}$ (as compared to other ion-channel conductances) and, therefore, spiral waves are most unstable in the presence of a gradient in $G_{Kr}$. By contrast, we find that ω varies most gradually with $G_{to}$, and hence the spiral wave is most stable in the presence of a gradient in $G_{to}$ (as compared to gradients in other conductances). Earlier studies have investigated the effects of ionic heterogeneity on spiral-wave dynamics. Regional ionic heterogeneities have been found to initiate spiral waves , attract spiral waves to the heterogeneity , and destabilize spiral waves . The presence of APD gradients in cardiac tissue has been shown to drive spirals towards large-APD (low-ω) regions or towards small-APD regions , a phenomenon called ‘anomalous drift’, depending on the model parameters. We have also observed the drift of spiral waves towards the large-APD region (see ) at early times, before the waves break up. A study by Zimik et al. finds that spatial gradients in ω, induced by gradients in the density of fibroblasts, can precipitate a spiral-wave instability. However, none of these studies provides a clear understanding of the mechanisms underlying the onset of spiral- and scroll-wave instabilities, from a fundamental standpoint. Moreover, none of these studies has carried out a detailed calculation of the pristine effects of each individual major ionic current, present in a myocyte, on the spiral-wave frequency; nor have they investigated, in a controlled manner, how gradients in ion-channel conductances lead to spiral- or scroll-wave instabilities. Our work makes up for these lacunae and leads to specific predictions that should be tested experimentally. We end our paper by discussing certain limitations of our work.
We have shown, via a representative simulation in an anatomically realistic heart domain with fiber orientation, that large spatial gradients in the APD can induce scroll-wave breaks in real hearts; however, we have not incorporated other important physiological details of real mammalian hearts, like the intrinsic heterogeneities that exist in them , and the bidomain nature of the tissue . Moreover, in our study we induce heterogeneity in the medium by applying a spatial gradient that extends throughout the domain, whereas heterogeneities in real hearts tend to occur in localized regions. However, our result of spiral-wave breakup in the large-APD region should still hold even if the heterogeneities are localized, as has been shown in .
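For completeness, a localized heterogeneity of the kind mentioned above can be modeled by replacing the global linear gradient with, for example, a Gaussian patch in the multiplication factor g. The sketch below is purely illustrative: the patch location, width, and amplitude are arbitrary assumptions; only the 960 × 960 grid size is taken from the 2D simulations described earlier.

```python
import numpy as np

N = 960                                    # grid size quoted for the 2D ORd simulations
y, x = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
yc, xc, sigma = 640, 480, 80.0             # assumed patch centre and width (grid units)
g_background, g_peak = 1.0, 3.0            # assumed background and peak values of g

# Gaussian patch: g equals g_background far from the patch and g_peak at its centre.
g = g_background + (g_peak - g_background) * np.exp(
    -((x - xc)**2 + (y - yc)**2) / (2.0 * sigma**2))
# g(x, y) would then multiply the chosen conductance, e.g. G_Kr(x, y) = g(x, y) * G_Kr0,
# shortening the APD only inside the patch rather than across the whole domain.
print(g.min(), g.max())                    # ~1 outside the patch, 3 at its centre
```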
S1 Video Spiral-wave instability in the Aliev-Panfilov model. Video of pseudocolor plots of the transmembrane potential V, showing the formation of a spiral-wave instability in a medium with a gradient in k: $g_{min} = 0.5$ and $g_{max} = 1.5$. For the video, we use 10 frames per second, with each frame separated from the succeeding frame by 20 ms in real time. (AVI)

S2 Video Spiral-wave instability in the ORd model. Video of pseudocolor plots of the transmembrane potential $V_m$, showing the formation of a spiral-wave instability in a medium with a gradient in $G_{NaCa}$ ($g_{min} = 0.2$ and $g_{max} = 2$). For the video, we use 10 frames per second, with each frame separated from the succeeding frame by 20 ms in real time. (AVI)

S3 Video Scroll-wave instability. Video of pseudocolor plots of the transmembrane potential $V_m$, showing the formation of a scroll-wave instability in an anatomically realistic model for human ventricles. A linear gradient in $G_{Kr}$ is applied along the apico-basal direction: $g_{min} = 2$ at the apex and $g_{max} = 6$ at the base. For the video, we use 10 frames per second, with each frame separated from the succeeding frame by 20 ms in real time. (AVI)

S4 Video Stable scroll wave. Video of pseudocolor plots of the transmembrane potential $V_m$, showing a stable scroll wave for a small $\Delta g = 0.1$ in an anatomically realistic model for human ventricles. A linear gradient in $G_{Kr}$ is applied along the apico-basal direction: $g_{min} = 2$ at the apex and $g_{max} = 2.1$ at the base. For the video, we use 10 frames per second, with each frame separated from the succeeding frame by 20 ms in real time. (AVI)